


EMC: Mesoscale Modeling Branch FAQ

Table of Contents

Introduction (Updated 7/06)

This is the Mesoscale Modeling Branch FAQ. We have collected detailed answers to various questions over the past several years and they are presented here under general subject headings. Please remember, this is a dynamic document. We are trying to eliminate outdated items, but you can expect this to be a slow process. We are putting dates on all new sections so you can see when they were last updated. To see when important changes were made in operations which may have rendered an FAQ item obsolete, we recommend you check the MMB change log at

Back to Table of Contents


To interpolate the NDAS first guess to each nest's domain, we use the NEMS Preprocessing System (NPS) codes.
These codes interpolate only the prognostic fields that are used to initialize the model integration
(T, wind, Q on model levels, soil parameters, surface pressure, etc.). 2-m T, 2-m Td, and 10-m wind are diagnostic quantities
that are not needed to start a model integration, so NPS does not process them. Hence, these fields
are undefined at 00-h in the nests, as are many other diagnostic fields. At 1-h and beyond these shelter fields are written out, since
the model has started integrating.

One may then ask why there are valid 00-h 2-m T/Td and 10-m winds in the 12 km parent NAM. In previous versions of the operational NAM,
what you saw at 00-h was NOT an analysis of the 2-m T and 10-m wind at the valid time. It was the last
NDAS forecast of these fields, passed through the NAM analysis in the full model restart file and posted at 00-h. With the October 2011 NAM
implementation, this changed for the 12 km domain. The analysis now
analyzes 2-m T/q and 10-m wind if there are valid first-guess values of these fields from a full model restart file
written out by the NDAS forecast. Because the nests' first guesses are created by NPS (which doesn't process
shelter fields), the GSI analysis for the NAM nests does not do a 2-m T/Td and 10-m wind analysis.

It should be noted that if for any reason the NDAS did not run and the 12 km NAM had to be initialized off the GDAS, 
its 00-h 2-m T/Td and 10-m wind would be zero as well.

13 April 2017 update: The 21 March 2017 NAM upgrade
includes the replacement of the 12-h NDAS with a 6-h data assimilation cycle with hourly analysis updates for the
12 km parent domain and the 3 km CONUS and Alaska nests. Also added was the running of a diabatic digital filter
prior to every forecast run (both the 1-h forecast during the data assimilation and the 84-h NAM forecast). These two changes
now allow the NAM nests' 2-m T/Td and 10-m wind to have realistic values at 00-h.

Back to Table of Contents


Radar echo top height in the NAM parent and nests is computed at a given grid point only if the simulated reflectivity is 18.3 dBZ or higher.
If radar echo top height is undefined (due to a lack of simulated reflectivity above 18.3 dBZ), the NAM post-processor masks out
the point with a bit map instead of setting the field to some fixed value (like -9999). Given the small size of the fire weather nest
domain, it is entirely possible to get no (or very few) simulated echoes above 18.3 dBZ at some forecast hours,
in which case the entire radar echo top height field would be undefined. When this
happens, the NAM post-processor will not output the field at all.

Back to Table of Contents


The NAM 12km parent uses (as it always has) the Betts-Miller-Janjic (BMJ) parameterized convection scheme (Janjic, 1990 MWR, 1994 MWR). 
The NAM nests use a highly modified version of the BMJ scheme ("BMJ_DEV") which has moister profiles and is set to have reduced
convective triggering, leaving the majority of the precipitation 'work' to the grid-scale microphysics (Ferrier). These settings for the
nests in the BMJ_DEV scheme gave better QPF bias than running with explicit convection, and better forecast scores against
upper-air and surface observations. So you should expect to see different QPF fields between the 12 km NAM parent and the nests.

13 April 2017 update:
In the 12 August 2014
NAM upgrade, the "BMJ_DEV" scheme was turned off in all NAM nests except the 6 km Alaska nest.
In the 21 March 2017
NAM upgrade, the horizontal resolution of the NAM CONUS and Alaska nests was increased to
3 km, and extensive model physics and data assimilation changes were made that improved the NAM nests' precipitation forecasts.

Back to Table of Contents


The NAM nests (4 km CONUS, 6 km Alaska, 3 km Hawaii/Puerto Rico) run simultaneously as one-way nests inside the NAM-12 km parent 
to 60-h, and are thus available at the same time as the 12 km NAM. The 1.33 km fire weather nest is a one-way nest inside the 
4 km CONUS nest, running to 36-h. These nests get their boundary conditions updated from the parent every time step. The 
nests are initialized from the NAM Data Assimilation System (NDAS) first guess just as the 12 km NAM is initialized. The NAM 
12 km run uses the previous GFS for lateral boundary conditions.

The NAM downscaled grids that are distributed are from the NAM nests from 0-60 h and from the NAM 12 km parent from 63-84 h.

The High-Resolution Window Forecasts (HIRESW) are stand-alone runs of the NEMS-NMMB and the WRF-ARW at 3-4 km resolution. They
run after the GFS, so they use the current cycle GFS for initial and boundary conditions, except for the CONUS runs, which use
the Rapid Refresh (RAP) analysis for initial conditions. We run five HIRESW domains, two large domains (CONUS, Alaska) and
three small domains (Hawaii, Puerto Rico, and Guam), on this schedule:

0000Z : CONUS, Hawaii, Guam
0600Z : Alaska, Puerto Rico
1200Z : CONUS, Hawaii, Guam
1800Z : Alaska, Puerto Rico

On the NCEP Model Analysis and Guidance page,
the NAM nests are called "NAM-HIRES" and the HIRESW runs are called "HRW-NMM" or "HRW-ARW".

More details on the differences between the NAM CONUS nest and the CONUS NMMB HiResW run are described in this table.

13 April 2017 update: As of 21 March 2017, the horizontal resolution of the NAM CONUS and Alaska nests was increased to
3 km. Details on the NAM upgrade that included these resolution changes can be found

Back to Table of Contents


In the NAM, the Noah land model makes a binary choice as to whether falling precipitation is liquid or frozen based 
on the array "SR" (snow ratio) from the NAM microphysics. If SR>0.5, it's frozen precip (i.e. snow), otherwise 
it's rain.  

The density of the new snow as it hits the surface is dependent on the air temperature.  Snow on the ground 
then accumulates and compacts.  If snow is already present (on the ground), the new plus old snow density 
is "homogenized" into a single snowpack depth with uniform density.

Snow is initialized in the model once per day at the start of the 06Z NDAS from the National Ice Center snow cover analysis and the AFWA snow depth
analysis, with an assumed initial snow density of 5:1 (snow depth-to-snow water equivalent).
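The two rules described above (the SR phase threshold and the 5:1 depth-to-SWE initialization) can be sketched as follows; the function names are illustrative, not identifiers from the Noah code:

```python
def precip_phase(sr):
    """Binary phase choice driven by the microphysics snow ratio SR:
    frozen precip (snow) if SR > 0.5, otherwise rain."""
    return "snow" if sr > 0.5 else "rain"

def initial_snow_depth(swe_m):
    """Snow depth [m] implied by the assumed initial 5:1 snow
    depth-to-snow-water-equivalent ratio at the 06Z snow update."""
    return 5.0 * swe_m
```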

Snow melts if the surface (skin) temperature rises to 0C, based on the surface energy budget (net radiation,
sensible and latent heat fluxes, soil heat flux). For a partially snow-covered model gridbox (<100%),
some of the surface heating goes into the skin temperature increase, surface sensible and latent heat flux,
and ground heat flux for the non-snow-covered portion of the grid box, so the surface temperature can exceed
0C during snow melt, and thus the low-level (e.g. 2-m) air temperature may exceed 0C by several degrees.

Back to Table of Contents


The GRIB units for the cloud ice field are *incorrect*.  In a sample
inventory of the CONUS nest output shown here, you'll see many references to the 'CICE - Cloud Ice [kg/m^2]' field at various vertical levels and forecast ranges,
*but the actual units are kg/kg*.  For example, in "123 100 mb CICE 6 hour fcst Cloud Ice [kg/m^2]", those units should be kg/kg.

This error has been a source of confusion in the past, and it's been around a long time.  Why hasn't it been fixed?
Well, changing things in GRIB (a WMO standard) takes a while, but the main reason is inertia: we don't want to blow existing users out of the water.
This is especially true for fields that have been around for a long time and probably have users who have adapted their applications and their forecast
use to the field in its present form.  Changing (yes, fixing) it means every one of those users has to make a change to accommodate it.
Long lead time and widespread advance notice are required.

Note, the column-integrated cloud ice (from the link above) is listed as: 541 entire atmosphere (considered as a single layer) 
TCOLI 6 hour fcst Total Column-Integrated Cloud Ice [kg/m^2]

These units are correct.
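The unit distinction is easy to see from the relationship between the two fields: the column total is the kg/kg mixing-ratio profile integrated over pressure. A simple layer-sum sketch (not the post-processor's actual integration code):

```python
G = 9.80665  # standard gravity, m/s^2

def total_column_ice(cice_kg_per_kg, layer_dp_pa):
    """Approximate TCOLI [kg/m^2] from a profile of cloud ice mixing
    ratios [kg/kg] and the pressure thickness of each layer [Pa]:
    TCOLI ~ sum(q_i * dp_i) / g."""
    return sum(q * dp for q, dp in zip(cice_kg_per_kg, layer_dp_pa)) / G
```

The 3-D CICE values only make dimensional sense as kg/kg inside this integral; kg/m^2 is correct only for the column-integrated TCOLI.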

Back to Table of Contents


Step 1: Tropical storm relocation

A job pulls in the tropical cyclone bulletins (or "tcvitals")
for the current cycle from the JTWC, FNMOC, and NCEP/NHC.
These bulletins are then merged, quality controlled, and are
written to the so-called "tcvitals" files. In the GDAS, if the model
representation of the tropical storm can be found in the forecast,
the storm is mechanically relocated in the first guess prior to the 
Global-GSI analysis. If the model storm can not be found, the bogus 
wind observations are used (see step 2). The GDAS first guess with the
relocated storm is also used as background to the t-12 NDAS analysis, 
and it is also used to improve the observation quality control processing 
in the NDAS/NAM and GDAS/GFS. 

Step 2: Synthetic tropical cyclone data

In the NAM/NDAS (for all storms) and in the GFS/GDAS (for weak-intensity 
storms not found in the GDAS forecast) the tcvitals file created in 
Step 1 is used to create synthetic (bogus) wind profile reports throughout
the depth of each storm. In addition, in the NAM/NDAS (for all storms) and
in the GFS/GDAS (for weak-intensity storms not found in the GDAS forecast), 
all mass observations sufficiently "close" to each storm in the tcvitals file
(i.e., within the lat/lon boundary for which wind bogus reports are generated) 
are flagged for non-use by the NAM-GSI analysis and Global-GSI analysis. Also, 
in the NAM/NDAS and GFS/GDAS, dropwindsonde wind data sufficiently "close" to
each storm in the tcvitals file (i.e., within a distance to the storm center of
111 km or three times the radius of maximum surface wind, whichever is larger)
are flagged for non-use by the NAM-GSI analysis and Global-GSI analysis

13 April 2017 update: With the 21 March 2017 NAM upgrade, the tropical cyclone
relocation algorithm used in the GFS is also used in the NAM for the 12 km North
American domain only. The relocation is performed on the GDAS first guess used to
start every NAM 6-h data assimilation (DA) cycle (because the relocated GDAS first guess
is not yet available) and on the NAM first guess forecast at the end of the 6-h
DA cycle. For details go to slides 49-51 in this presentation
of the NAM upgrade.  With this upgrade the synthetic tropical cyclone data is no longer
used in the NAM.

Step 3: Sea level pressure obs in tcvitals file

The Global-GSI analysis reads in the estimated minimum sea-level pressure for each storm
in the tcvitals file created in Step 1 and assimilates this pressure. 


The 00Z and 12Z NAM run was first called the "Early" Eta because it replaced the Limited Fine Mesh Model (LFM)
in the NCEP production suite in 1993. Because of this it had to create LFM look-alike files, which had 12-h QPF buckets,
and the 00Z/12Z NAM model integration has had 12-h QPF buckets ever since. 3-h QPF buckets have been added to
selected grids (such as the 40 km, 20 km, and 12 km AWIPS grids) during the post-processing of output grids.

The 06Z and 18Z NAM runs are the descendants of the old 29 km Meso Eta, which ran from 03Z and 15Z out to 33-h 
in 1995. It was initialized with 3-h assimilation spinup from 00z/12z. Because of the 3 hour offset in the starting 
time of the Meso Eta, it had to have 3-h QPF buckets. 

In 1995 the Early Eta was upgraded from 80 km to 48 km, and a 12-h assimilation cycle was added to initialize
the 00z/12z 48 km Eta forecast. So from Oct 95-Feb 97 NCEP ran a 00Z/12Z 48 km Eta and a 03Z/15Z 29 km Eta, which
were distinct model runs with no connection between the two. A decision was then made to unify the two Eta runs,
so in February 1997 the 32 km Eta was implemented with four forecasts per day at 00z, 03z, 12z, and 18z, all
from the Eta (now NAM) Data Assimilation System. At that time NCEP had to run a 03Z Eta-32 instead of an 06Z Eta-32,
initially because of conflicts with the Medium-Range Forecast (MRF). When NCEP acquired its first IBM
supercomputer in 2000, the 03Z Eta-32 was moved to 06Z. However, because of the development history
of the 00z/12z and 06z/18z NAM runs, we have to maintain the different QPF bucket lengths for the
foreseeable future.

See this web link for the history of NAM 
and other NCEP Mesoscale model evolution (with links to documentation) since 1993.

Back to Table of Contents


(This answer applies to all non-WRF NMM and all Eta Model E-grids too.)
Let all longitudes be reckoned positive east.
Let lat_e and lon_e be the geographic lat/lon and lat_r and lon_r be the rotated lat/lon.
Let lat_0 and lon_0 be the geographic lat/lon of the E-grid's central point.
This is where the grid's rotated equator and rotated prime meridian cross.

First find the rotated lat/lon of any point on the grid for which the geographic lat/lon is known.
Let X = cos(lat_0) cos(lat_e) cos(lon_e - lon_0) + sin(lat_0) sin(lat_e)
Let Y = cos(lat_e) sin(lon_e - lon_0)
Let Z = - sin(lat_0) cos(lat_e) cos(lon_e - lon_0) + cos(lat_0) sin(lat_e)
Then lat_r = atan [ Z / sqrt(X**2 + Y**2) ]
And lon_r = atan [ Y / X ]  (if X < 0, add pi radians)

Now find the geographic lat/lon of any point for which the rotated lat/lon are known.
lat_e = asin [ sin(lat_r) cos(lat_0) + cos(lat_r) sin(lat_0) cos(lon_r) ]
lon_e = lon_0 ± acos [ cos(lat_r) cos(lon_r) / ( cos(lat_e) cos(lat_0) ) - tan(lat_e) tan(lat_0) ]
In the preceding eqn, use the "+" sign for lon_r > 0 and use the "-" sign for lon_r < 0. 
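The two transformations above translate directly into code. This is a sketch of the stated formulas (atan2 supplies the "add pi if X < 0" correction; the central-point values in the comment are arbitrary examples, not a specific operational grid):

```python
from math import radians, degrees, sin, cos, tan, asin, acos, atan2, sqrt

def geo_to_rotated(lat_e, lon_e, lat_0, lon_0):
    """Geographic -> rotated lat/lon; all angles in degrees, longitude positive east."""
    lat_e, lon_e, lat_0, lon_0 = map(radians, (lat_e, lon_e, lat_0, lon_0))
    x = cos(lat_0) * cos(lat_e) * cos(lon_e - lon_0) + sin(lat_0) * sin(lat_e)
    y = cos(lat_e) * sin(lon_e - lon_0)
    z = -sin(lat_0) * cos(lat_e) * cos(lon_e - lon_0) + cos(lat_0) * sin(lat_e)
    # atan2 handles the "if X < 0, add pi radians" quadrant correction
    return degrees(atan2(z, sqrt(x * x + y * y))), degrees(atan2(y, x))

def rotated_to_geo(lat_r, lon_r, lat_0, lon_0):
    """Rotated -> geographic lat/lon; all angles in degrees."""
    sign = 1.0 if lon_r > 0 else -1.0  # "+" for lon_r > 0, "-" for lon_r < 0
    lat_r, lon_r, lat_0, lon_0 = map(radians, (lat_r, lon_r, lat_0, lon_0))
    lat_e = asin(sin(lat_r) * cos(lat_0) + cos(lat_r) * sin(lat_0) * cos(lon_r))
    arg = cos(lat_r) * cos(lon_r) / (cos(lat_e) * cos(lat_0)) - tan(lat_e) * tan(lat_0)
    arg = max(-1.0, min(1.0, arg))  # guard acos against floating-point round-off
    return degrees(lat_e), degrees(lon_0 + sign * acos(arg))

# Round trip with an arbitrary central point, e.g. lat_0=50, lon_0=-107:
# geo_to_rotated(35.0, -95.0, 50.0, -107.0) then rotated_to_geo recovers (35, -95).
```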

Back to Table of Contents

WRF Post Processing


_*Radar reflectivity products from the NAM model *_

At a given forecast time, three-dimensional radar reflectivities are
calculated at the native model resolution (vertical and horizontal) from
the algorithm described below using as input the three-dimensional
mixing ratios of rain and precipitation ice (including variable ice
densities to distinguish between snow, graupel, and sleet), and the
two-dimensional convective surface precipitation rates.   The following
two-dimensional radar reflectivity products are derived from the
three-dimensional forecast reflectivities:

   1. Lowest level reflectivity - calculated at the lowest model level
      above the ground.
   2. Composite reflectivity - maximum anywhere within the atmospheric
      column.
   3. 1 km and 4 km AGL reflectivity - interpolated to heights of 1 km
      and 4 km, respectively, above the ground.
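The three derived products can be sketched per column as follows (illustrative helper functions, assuming a column of reflectivities ordered bottom-up with matching AGL heights; linear interpolation is an assumption, not a statement of the operational method):

```python
def lowest_level_refl(column_dbz):
    """Product 1: reflectivity at the lowest model level above ground."""
    return column_dbz[0]

def composite_refl(column_dbz):
    """Product 2: maximum reflectivity anywhere in the column."""
    return max(column_dbz)

def refl_at_height(column_dbz, heights_m, target_m):
    """Product 3: value at a fixed height AGL (e.g. 1000 or 4000 m),
    linearly interpolated between the bracketing model levels;
    None if the target lies outside the column."""
    for i in range(len(heights_m) - 1):
        z0, z1 = heights_m[i], heights_m[i + 1]
        if z0 <= target_m <= z1:
            w = (target_m - z0) / (z1 - z0)
            return column_dbz[i] + w * (column_dbz[i + 1] - column_dbz[i])
    return None
```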

Back to Table of Contents


*_Algorithm used to calculate forecast radar reflectivities from the NAM model_*

Simulated radar reflectivity from the operational NAM is in units of dBZ
and it is calculated from the following sequence.

  1. dBZ = 10*LOG10(Ze), where Ze is the equivalent radar reflectivity
     factor.  It is derived from the sixth moment of the size
     distribution for precipitation particles, and it assumes that all
     of the particles are liquid water drops (rain).
  2. Ze = (Ze)rain + (Ze)ice + (Ze)conv. The equivalent radar
     reflectivity factor is the sum of the radar backscatter from rain
     [(Ze)rain], from precipitation-sized ice particles [(Ze)ice], and
     from parameterized convection [(Ze)conv].
  3. (Ze)rain is calculated as the sixth moment of the rain drop size
     distribution.  It is the integral of N(D)*D**6 over all drop
     sizes, where N(D) is the size distribution of rain drops as a
     function of their diameter (D).
  4. (Ze)ice is calculated as the sixth moment of the particle size
     distributions for ice, but with several correction factors.  The
     first is accounting for the reduced backscatter from ice particles
     compared to liquid drops.  Because equivalent radar reflectivity
     assumes that the precipitation is in the form of rain, this
     correction factor is 0.189 (the ratio of the dielectric factor of
     ice divided by that of liquid water).  The second correction
     factor accounts for deviations in ice particle densities from that
     of solid ice, in which the radar backscatter from a large,
     low-density irregular ice particle (e.g., a fluffy aggregate) is
     the same as that from a solid ice sphere of the same mass (e.g., a
     small sleet pellet).
  5. (Ze)conv is the radar backscatter from parameterized subgrid-scale
     cumulus convection.  The algorithm that is employed in the NAM
     assumes the radar reflectivity at the surface is
     (Ze)conv=300*(Rconv)**1.4, where Rconv is the surface rain rate
     (mm/h) derived from the cumulus parameterization.  This so-called
     Z-R relationship is based on the original WSR-88D algorithm.  The
     radar reflectivity is assumed to remain constant with height from
     the surface up to the lowest freezing level, and then it is
     assumed to decrease by 20 dBZ from the freezing level to the top
     of the parameterized convective cloud.
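Steps 1 and 5 above are simple enough to state in code. A sketch (the floor on Ze is an assumption added to keep the logarithm defined at echo-free points, not a documented NAM value):

```python
import math

ZE_FLOOR = 1e-10  # assumed floor so log10 stays defined where there is no echo

def dbz_from_ze(ze):
    """Step 1: dBZ = 10 * log10(Ze)."""
    return 10.0 * math.log10(max(ze, ZE_FLOOR))

def ze_conv(rconv_mm_per_h):
    """Step 5: surface (Ze)conv = 300 * Rconv**1.4, the Z-R relationship
    based on the original WSR-88D algorithm (Rconv in mm/h)."""
    return 300.0 * rconv_mm_per_h ** 1.4
```

For example, a 1 mm/h convective rain rate maps to 10*log10(300), about 24.8 dBZ at the surface.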

For items 3 and 4 above, the simulated equivalent radar reflectivity
factor (Ze) is calculated from the sixth moment of the size distribution
for rain and ice particles assumed in the microphysics scheme.  Radar
reflectivities calculated from numerical models are highly sensitive to
the size distributions of rain and precipitating ice particles,
particularly since the radar backscatter is proportional to the sixth
power of the particle sizes.  In addition, the mixture of water and ice
within a wet ice particle (e.g., during melting) can have a large influence
on radar backscatter, and this effect is currently ignored in the forecast
reflectivity products.  The contribution from parameterized convection is
highly parameterized and is likely to provide the largest sources of error
to the forecast reflectivities.  For strong local storms, which tend to
occur with greater frequency during the warm season, forecast reflectivities
from the NAM are expected to be less intense and cover larger areas than
the radar observations.

Back to Table of Contents


Assuming you are using the nam.tHHz.awip12FF.tm00 files, where HH is the cycle time = 00, 06, 12 or 18, FF is the forecast hour =
00, 01, 02 ... 24, and YYYYMMDD is year, month & day: you are probably
not seeing them because the new radar reflectivity fields are defined in
NCEP GRIB parameter table version #129, and
you are expecting them to be defined in the normal/default Table #2.  We had to
put them in this alternative table because Table 2 had no room for new
parameters to be added.  According to the GRIB documentation, you can find
which Parameter Table is being used in octet 4 of the GRIB PDS (Product
Definition Section).

Looking at an inventory of the nam.t12z.awip1212.tm00 file, the
reflectivities are at records 171-174:

171:REFD:kpds5=211:lev 1:12hr fcst:NAve=0
172:REFC:kpds5=212:entire col:12hr fcst:NAve=0
173:REFD:kpds5=211:1000 m above gnd:12hr fcst:NAve=0
174:REFD:kpds5=211:4000 m above gnd:12hr fcst:NAve=0

record 171 = Lowest model level derived model radar reflectivity
record 172 = Composite radar reflectivity
record 173 = 1000 m above ground derived model radar reflectivity
record 174 = 4000 m above ground derived model radar reflectivity

Note the kpds5 entries above and that in Table 129, derived model
reflectivity is variable #211 and composite reflectivity is variable
#212. In Table 2 these would have been upward short wave and long wave
flux, and this is probably what your existing processing assumed they were.
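The octet-4 check and the table/parameter ambiguity can be sketched as follows (the lookup dictionary is illustrative and covers only the entries discussed here, not the full tables):

```python
def pds_table_version(pds_octets):
    """GRIB1 PDS octet 4 (1-indexed) holds the parameter table version,
    so it is index 3 of the zero-indexed byte string."""
    return pds_octets[3]

# The same parameter number means different things in different tables:
PARAM_NAME = {
    (129, 211): "REFD - derived radar reflectivity",
    (129, 212): "REFC - composite radar reflectivity",
    (2, 211): "USWRF - upward short-wave radiation flux",
    (2, 212): "ULWRF - upward long-wave radiation flux",
}
```

Decoders that ignore octet 4 and assume Table 2 will silently mislabel records 171-174 as radiation fluxes.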

Back to Table of Contents

Eta Model


09 Feb 98 Eta/EDAS Implementation: (see 32km TPB)

     1 - Eta resolution increased from 48-km to 32-km, from
         38 vertical layers to 45 vertical layers, and from 
         2 to 4 soil layers
     2 - Optimal Interpolation (OI) objective analysis method
         replaced with the 3-D Variational analysis method
     3 - five new observing data types were added to the Eta 4-D
         Data Assimilation System (EDAS): sfc station winds obs 
         over land, VAD wind profiles from NEXRAD, ACARS aircraft
         temperature data, SSM/I oceanic sfc winds, tropical cyclone 
         bogus data
     4 - In the EDAS, fully continuous Eta-based cycling of cloud
         water, turbulent kinetic energy (TKE), and soil moisture/temperature 
         (i.e. Eta first-guess forecast used, no global model first-guess
           used for these fields)

     03 Jun 98 Eta/EDAS Implementation

     1 - a new daily 23-km N.H. NESDIS snowcover/sea-ice analysis is
         implemented in the EDAS
     2 - fully continuous Eta-based cycling of ALL model prognostic
         fields is implemented in the EDAS (i.e. Eta first-guess forecast 
         used for ALL prognostic fields, no global model first-guess used, 
         global model now used only for lateral boundary conditions)
     3 - Eta forecast runs increased from twice per day (00Z and 12Z)
         to four per day (00Z, 03Z, 12Z, 18Z).  The 00Z and 12Z runs remain 
         out to 48 hours.  The 03Z run is out to 33 hours and the 18Z run 
         is out to 30 hours.

     03 Nov 98 Eta/EDAS Y2K/F90 Implementation

     1 - upgrade source code to fortran 90 and Y2K compliance
     2 - fix error in 3DVAR which was excluding nearly all surface data,
         also improve fit to data
     3 - modify output on Eta grid to 2-D indexing, RH calculations now
         consistent with model

OUTPUT INFO: We can NOT make changes to the AWIPS data stream in the short term, so ...

We continue to generate the same output from the early Meso Eta as was available from the early Eta or Meso Eta. [We will also be generating 80-90km NGM-look-alike files to keep various legacy systems (AFOS, FOS and FAX) and orphaned codes alive.] YES, they're horribly degraded in resolution, BUT they all will reflect improvements in the forecast quality due to the increase in resolution of the actual computational model from 48km/38levels to 32km/45levels, you just won't be able to see the higher resolution signals without doing something extra like pulling down stuff (see next paragraph) from the OSO server for display on the SOO/SACs or with PCGRIDDs.

BUFR output will change. New tables will be needed to handle the 4-layer soil output.

IN ADDITION, we will be generating a full complement of output fields on the higher resolution grid #212 (40km), the normal small complement of fields on #215 (20km) and a full complement on the computational resolution grid #221 (32km - this is a NEW grid which covers all of North America and the computational grid of the early Meso Eta on a Lambert conic-conformal grid projection at 32 km resolution with ~97000 grid points). We intend to generate incremental tiles for grid #221, so users would only need to pull down those tiles which cover their area (and a master grid #222 which will be 188km version [every 6th point] of #221 used to process the increments and which can be used to see the BIG PICTURE or whole domain of the early Meso Eta with little bandwidth demands).


  • 00z & 12z runs: #211 and #207 (for Alaska) output PLUS #212, #215 (for ICWF), #221 (with incremental tiles) and #222 188km master grid;
  • 03z & 18z runs: #212 and #215 output PLUS #207(?), #211(?), #221 (with incremental tiles) and #222 188km master grid.

There may be a few output grids generated currently that I've missed, but our intention is to cover ALL existing needs and add the higher resolution, full resolution and full domain output as optional.

Higher resolution runs of the Eta:

We will make occasional 10km runs for various domains depending upon the weather. For example, a west-coast domain has been set up for "El Nino" type storms, and a Great Lakes domain is there for lake effect snow storms.

Regular, daily high-resolution runs will have to wait for the acquisition of our next computer (hopefully to arrive mid FY98 and be operational by early FY99). We intend to push our 4/day runs as close to 10 km as we can. How close we get will depend on the amount of machine our measly $30-40 million will buy us. In the meantime, we will continue to make the nested runs for practice. FY99 is also the timeframe I've heard that will allow changes / expansion of the AWIPS gridded product distribution, so the OSO server connection will have to do until then, I guess.

Back to Table of Contents


The new 'native' Eta grid representation in GRIB follows the 2-D grid indexing that has been used by the model for some time. This is still staggered row by row, wind and mass, but the same number of points are in each row. This means that the number of points on the grid is now IM*JM (rather than IM*JM-JM/2). The "extra" point now included on every other row is not used during integration.
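The point-count difference just described can be checked with a one-liner (integer division implements the "every other row gets one extra point" convention):

```python
def egrid_point_counts(im, jm):
    """Return (filled, staggered) point counts for an IM x JM E-grid:
    the 2-D GRIB representation pads every other row with an unused
    point, giving IM*JM instead of the original IM*JM - JM//2."""
    return im * jm, im * jm - jm // 2
```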

      Staggered, 2-D E-grid indexing
   3   H   V   H   V   H   V   H  (V)

J  2   V   H   V   H   V   H   V  (H)

   1   H   V   H   V   H   V   H  (V)      () points are "extra"
        \ /     \ /     \ /     \ /        included in grid but not
         1       2       3       4         used in integration

      Data Representation Type 203 - Arakawa staggered E-grid on rotated
                                     latitude/longitude grid

       bytes:       definition:
        7-8         Ni -  # points in each row
        9-10        Nj -  # of rows (# of points in Y direction)
       11-13        La1 - latitude of first grid point (millideg)
       14-16        Lo1 - longitude of first grid point (millideg)
         17         Resolution and component flags
       18-20        La2 - central latitude (millideg)
       21-23        Lo2 - central longitude (millideg)
       24-25        Di - Longitudinal direction increment (millideg)
       26-27        Dj - Latitudinal direction increment (millideg)
         28         Scanning mode flags
       29-32        Reserved (zero)

     #192  Arakawa staggered E-grid on rotated latitude/longitude
           grid (Used by the 32 km Eta Model)

           Lo1=215.906E (144.094W)
           Res & Comp. Flag = 1 0 0 0 1 0 0 0
           Lo2=253.000E (107.000W)
           Di=222 millidegrees (exactly 2/9 deg)
           Dj=205 millidegrees (exactly 8/39 deg)
           Scanning Mode= 01000000

     #190  Arakawa staggered E-grid on rotated latitude/longitude
           grid (Used by the 80 km Eta Model, backup if C90 fails)

           Lo1=210.113E (149.887W)
           Res & Comp. Flag = 1 0 0 0 1 0 0 0
           Lo2=249.000E (111.000W)
           Di=577 millidegrees (exactly 15/26 deg)
           Dj=538 millidegrees (exactly 14/26 deg)
           Scanning Mode= 01000000

Back to Table of Contents


The NAM post will change with the operational release of NAM/WRF-NMM. The changes make these terms consistent with the standard GRIB conventions, as well as the NCEP GFS model.
(1) In NAM, the sign of the sensible (in GRIB, "SHTFL") and latent ("LHTFL") heat fluxes is changed so that now these fluxes are defined as positive upwards, e.g. typical daytime fluxes are positive. (Previously these fluxes were defined as negative upwards.) The 2 plots below show the new state for the NAM and the corresponding GFS (sensible heat flux 09-hr forecast valid 21z 04-July-2005). The previous NAM plot would look identical except for a reversed flux sign convention and corresponding color scale; the NAM latent heat flux is similarly changed.

(2) In NAM, the land-mask (in GRIB, "LAND") is re-defined such that LAND=0 over open-sea and sea-ice, and LAND=1 over land. (Previously LAND=0 over open-sea, and LAND=1 over land *and* sea-ice.) The companion sea-ice mask ("ICEC") in the NAM post is correct, with ICEC=0 over land and open-sea, and ICEC=1 over sea-ice. The 2 plots below show the corrected land-mask, and the companion sea-ice plot (e.g. 04-July-2005 for sea-ice).
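The land-mask re-definition amounts to a tiny mapping (a sketch of the convention change, not NAM post code; the function name is hypothetical):

```python
def remap_land_mask(old_land, icec):
    """Old convention: LAND=1 over land *and* sea-ice.
    New convention: LAND=1 over land only.
    ICEC is unchanged: ICEC=1 over sea-ice, 0 over land and open sea."""
    new_land = 1 if (old_land == 1 and icec == 0) else 0
    return new_land, icec
```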

Back to Table of Contents


EMC has received much feedback this spring (2006) concerning a high bias in 2-meter dew point temperatures in the operational NAM (Eta) model. This is most likely attributable to a bug** in the land-surface physics driver code portion of the NAM that was inadvertently introduced in a May 2005 implementation. This leads to an erroneous excess in surface evaporation over land, particularly during the warm season in non-sparse vegetated regions. For example, the July 2005 mid-day monthly mean (31-day average, 18-hour forecast) surface moisture flux from the NAM (Figure 1; units: W/m2) showed unrealistic and much larger values compared to the NAMX (WRF-NMM) (Figure 2; units: W/m2), especially in the mid-west and plains (negative fluxes upward*). Since the summers of 2004 and 2005 were not markedly different, the July 2004 (prior to the bug) mid-day monthly mean surface moisture flux from the NAM (Figure 3; units: W/m2) is qualitatively similar to the July 2005 NAMX. Also, independent verification with a network of surface flux stations shows better agreement between observations and NAMX versus NAM for the July 2005 monthly mean diurnal cycle of surface moisture flux. This may partially explain the improvement in the low-level dew points in the (spring 2006) NAMX versus NAM described below.

Figure 1
Figure 1
Figure 2
Figure 2
Figure 3
Figure 3

Higher evaporation leads to higher 2-meter dew point temperatures; because of this high bias, forecasters may view the dew points at the first model level above ground to get more reasonable values, e.g. from the actual gridded data (if available), also plotted at: Looking at this issue from the initialization standpoint, consider the case of 15 April 2006 (00z cycle, 00-hour) over the mid-section of the country. The NAM 2-meter dew point analysis (Figure 4; units: deg-F) is excessively moist (too much purple) compared to observed dew point values (Figure 5; units: deg-F; same color coding for observed numbers and model value shades). A large part of the problem, however, is clearly with the diagnosis of dew point temperature at 2 meters, as the dew points at the 1st level above ground are generally several degrees lower and give a better overall impression, although still too high compared to observations (Figure 6; units: deg-F).

Figure 4
Figure 4
Figure 5
Figure 5
Figure 6
Figure 6

Even more encouraging, however, the NAMX shows a much better 2-meter dew point analysis (Figure 7; units: deg-F), with values slightly too moist compared to observations, but still a solid improvement over the NAM. At the first level above the ground, the NAMX dew point temperatures offer a very good analysis (Figure 8; units: deg-F), with values again corrected in the proper direction compared to the NAM. In this case, with the exception of some pockets of higher dew points in NW TX and southern AL, the NAMX analysis appears to be a solid improvement over the NAM for most regions in the central US.

Figure 7
Figure 7
Figure 8
Figure 8

The diagnosis of 2-meter dew point temperature using the first model level dew points and surface evaporation will be revisited. In the meantime, it should be noted that 2-meter fields are heavily post-processed and that the 1st level above ground is often the best place to examine dew points (although maybe not over higher terrain; see

The land-surface physics bug** in the operational NAM described above was discovered late in 2005, but a decision was made not to fix it since at the time NAMX implementation was planned for March (still the relatively cool season), and a proper NAM fix would have involved further testing. With limited resources, the focus remained on NAMX (which has no such bug). As described above, the good news is that the near-surface dew point temperatures look much better in NAMX, especially at the first model level above the ground. The NAMX will be implemented operationally on 13 June 2006.

*The sign convention for the surface latent (moisture) and sensible heat fluxes will change with the NAMX implementation where a positive flux is defined as upward (following the standard GRIB convention).

**This bug is NOT in the Noah land-surface model (LSM) itself, rather, it is ONLY in the Eta-model specific driver code for the Noah land-surface model, and has ONLY been in the operational NAM (Eta) since a May 2005 implementation. It is NOT in the WRF-NMM (NAMX) currently being tested to replace NAM/Eta, nor in the WRF-NMM running in the HiResWindow or in WRF-NMM running in the Short Range Ensemble Forecast (SREF) system. It is NOT in any previous version of the Eta Model, including the North American Regional Reanalysis (NARR) and the Regional Climate Data Assimilation System that grew out of the NARR, nor in the Global Forecast System (GFS) model or in any offline/stand-alone (land-model only) version of the Noah LSM.


The Eta has sure been having a rough time of it and especially at the longer ranges. As the events get closer, there has been a definite trend for the Eta to get much better (but the damage may already have been done if folks were led down the 'garden path' by the earlier runs). Let me tell you where we are and give you all a heads-up of where we are going.

Between IBM training and converting codes to the IBM SP, we have indeed been looking very intently at cases of poor performance, and there is no shortage of those. The first cases to come in were from the west and they were associated with large differences in the initial analyses between models. Initially, therefore, we had been looking at differences in the data sources going into the model runs, e.g. the fact that the Eta uses GOES precipitable water and the NGM & AVN do not, that the AVN uses radiances from TOVS and GOES 8 only (not GOES 10) while Eta uses temperature retrievals from TOVS and precip. water retrievals from both GOES 8 and 10, dump time issues etc. In a nutshell, we have found no clear data source issues that account for the cases of poor behavior.

What we have been seeing as we looked closely at the analyses over the eastern Pacific was a general lack of balance between the mass and wind corrections. While the balance around 500 mb was pretty good, above and below that level the coherence of the correction fields fell apart. The 3DVAR uses a form of thermal wind as a constraint on corrections and we expected a closer relationship between wind and mass corrections than we were seeing. After looking at the code for an error and not finding any obvious bugs, we verified that the observation that mass-wind balance was best at 500 mb was consistent with the fact that a 'reference level' at 500 mb is used in the 3DVAR, so the balance would be best at that level. Based on our examination of the cases, the recommendation was that the 3DVAR be re-tuned AGAIN, and we finished doing just that.

The first re-tuning of the 3DVAR was implemented in November 1998 when the Y2K version went into operations with a fix to allow surface obs into the analysis and adjustments (tunings) to improve the fit of the analysis to obs in general and to RAOB's and moisture obs in particular. These changes were in response to complaints about our loose fit to the RAOB data (especially moisture). The 3DVAR (as is true for most objective analysis schemes) cannot simultaneously fit the obs perfectly and adhere exactly to mass-wind balance. The closer fits to RAOB's were achieved at the expense of a looser mass-wind relationship. Where mass and wind are both observed together at multiple levels (i.e. RAOB's), there isn't much of a problem with balance since the obs define it. The problem becomes important where only one variable is observed or where you have only single-level observations. This occurs over the oceans all the time. Over land, this happens at all times except the 00z & 12z RAOB times. Faced with single-variable or single-level obs, the analysis uses the mass-wind relationship to create increments of the unobserved variables. It appears that relationship is too weak in the current 3DVAR and the resulting analyses are essentially univariate and potentially unbalanced. Without the proper balanced increment, the full effect of the data may not be reflected in the analysis and the subsequent model forecast may be degraded as a result. This has been known since my old OI analysis days when multivariate analyses were found to have better balance and produce better forecasts but were too computationally expensive to operate! Even then it was difficult to walk the fine line of fitting obs (a must for the Lance Bosart's of the world) and producing a suitably balanced set of corrections that the forecast model didn't reject. I have this tremendous feeling of deja vu!

The new settings for the November 1998 Y2K change were developed & tested in July 1998. We found improved analysis fits to surface data, RAOB's and moisture with little or no impact on forecast quality when tested in an 80km parallel for a three week period in July (all we could afford on the saturated Cray C-90 while crisis testing was going on for the T170 problems). Our focus was on RAOB's and what we saw over the US was the much better fit of the analysis to the RAOBs and a decent balance between mass & wind increments. This balance is a result of RAOB's measuring both mass & wind at multiple levels. The reduced level of coherence between mass & wind increments outside of the US and at off-times was hard to detect in the warm season when increments were relatively small and the subsequent negative forecast impact in the cool-season was never anticipated. Oh, for just a little bit of 20-20 foresight.

From my viewpoint, this 'theory' accounts for what we have been seeing in terms of Eta performance since the November change. The longer range forecasts have been the worst and they would be affected the most by having lousy oceanic analyses. The short term forecasts would be least affected since they are dominated by the nearby RAOB data which produces reasonably balanced initial conditions.

These changes have been made to the operational 3DVAR analysis as of 1200 UTC 13 May 1999. These represent a very small change in the 3DVAR code but one with a very large impact on the quality of not only the analyses but also (ESPECIALLY) the forecasts. I know a lot of you have given up on the Eta Model recently. Please begin to consider the Eta forecasts in a different light now that we have corrected the 3DVAR. I firmly believe that the Eta-32's performance will pick up to a level higher than we saw with its first implementation in February 1998 and vastly superior to its recent track record since November 1998.

Back to Table of Contents


Isolating this change to a single causality is far more difficult since there are several factors which may separately or together have brought on the change. Those which come to mind include:

1.) The recently implemented changes to the ETA model...have we implemented routines of increased sensitivity to the real atmosphere which induced potentially nonlinear modes which were designed to increase the model sensitivity and response...with the unintended consequence of higher order chaotic response in phase space.

2.) A series of storms which have had a southerly origin and been phased to miss the Mexican raobs...resulting in a temporal data void with crucial timing.

3.) A change in some data assimilation or acquisition routine which has increased the inherent variation in the overall database.

4.) The unusual nature of the late winter flow with consistent ridging and strong subtropical jet flow over our area, thus the systems have been originating in a data void area...where the models can not be expected to handle them well.

Any comments or other possibilities?



Apology #1: Both the atmosphere and numerical models are extremely complex and highly non-linear in their behavior. For this reason, I tend to resist the urge to find a single cause for anything.

Apology #2: Personally, I have a hard time connecting solutions to our numerical model problems to the concepts of non-linear instability, modes & phase space and, especially, the dreaded CHAOS thing. I mean, conceptually, they are great and help us all understand complex atmospheric behavior, but when it comes down to helping me decide how to change our numerical model, specifically the Eta Model, I find they aren't much help. Maybe I'm just thick headed. I know I'm bald, for what that's worth.

We have not implemented routines deliberately to increase our sensitivity PER SE. We have implemented new routines or changes to existing ones that simulate real atmospheric behavior in a more accurate manner. This has been a continual process and our change package of 2/18/97 was, for the most part, in this category: incremental improvements. In my opinion, none of those changes individually nor the bundle as a whole is the cause of the behavior you have noticed.

On the other hand, our current system IS more sensitive than it used to be. This comes from our improved ability to simulate the full spectrum of atmospheric behavior through a better model with better physics and numerics. Thus, the model has an increased ability to respond to initial conditions, and, like a marriage, this is for better or worse, for richer or poorer. Here, of course, I'm talking about data sensitivity. Your points 2), 3) and 4) ALL relate to data and YES we are more sensitive to initial conditions and data voids. BUT I wouldn't go back to being less sensitive. Again, there is a whole lot of better and richer that has come along with this heightened sensitivity in the form of improved analysis and data assimilation.

Higher sensitivity has also come through the positive impact of higher grid resolution. The early Eta has been at 48 km / 38 levels since October 1995 and we will upgrade this again this year to 32 km / 45 levels. With the higher resolution, there has been an increased ability to resolve all features in the model and to better predict their evolution with time. Perhaps most visible, however, is the higher level of detail of the model's terrain field and initial conditions. We are hammered by y'all to continue to get as much detail (as in mesoscale structure) into those fields as possible. A particular challenge is getting observations which lie below the model terrain to be included and have a beneficial effect on both the initial conditions and subsequent forecast. We've had more success with this in the 10 km / 60 level Nested Eta than at coarser resolutions.

There is a risk here. Consider the word - resolution. It is related to the ability to resolve. Norm Phillips, the creator of the NGM, always used the rule of thumb that any feature that was not covered by AT LEAST 10 grid-lengths was NOT adequately resolved in a numerical model. Undoubtedly, some features in every initial analysis fall below this level of resolution. Most have NO impact on the subsequent forecast. In addition, we use higher-resolution to justify using observations which at coarser resolutions were considered UN-representative. There is still the risk that noise or observation errors are now indistinguishable from the true signal or level of detail we want to include in our analyses. Quality control of observations is, therefore, a very high priority for us and it is increasingly difficult. Replacing the Optimum Interpolation analysis (boo hoo, that was my code) with the new 3D-variational (3DVAR) scheme this fall will help the analysis and initial conditions.

I can't stress enough nor agree more with your points 2) and 4) relating to storms coming out of the southern stream or through traditionally data sparse areas. Data assimilation techniques can not correct for model forecast errors if there is no data. Our new 3DVAR will allow us to use much more of the satellite data from GOES, which should help especially in those data poor areas.

You may have seen my response (I sent it to SOO_SAC) to Eastern Region recently where we looked at a sequence of model runs which initially called for a snow storm in NYC and then went south, leaving the city high and dry. In that case, ALL our models (Eta, NGM and AVN) did the switch in storm track and scenario AT THE SAME TIME. The UKMET and ECMWF models did likewise. This inconsistent behavior was traced to initial condition uncertainties in the large scale over the Pacific. Thus, it was not related to anything we had done with or to the Eta model in particular. From a forecaster's point of view, I realize it was inconsistent nonetheless.

Another problem arising from the upheaval of moving many codes to the Cray and creating a lot of new UNIX scripts was discovered (and fixed) just this weekend. There have been 11 cases since our change bundle went in 2/18/97 where the EDAS did not complete in time to provide the early Eta with a first guess. In these instances, we use a global guess and a non-EDAS script. It turns out that a problem existed in that script that allowed the possibility of old AVN lateral boundary conditions to be used (we can't tell exactly how old). Therefore, there were as many as 11 cases with a double whammy: inferior first-guess from the global PLUS old boundaries. I strongly believe we would find a high correlation between these runs and some if not all of your observations of inconsistent model performance. Let me repeat that this problem was corrected 4/19.

We are also committed to reducing the frequency with which we have to use the global as a first guess. This relates somewhat to the current Standard Operating Procedure of making sure (at all costs) that the NGM runs so y'all get MOS. I will be suggesting that maybe we could take a little more time to finish the EDAS and early Eta so that that model (which is much more accurate than the NGM) is NOT hobbled by inferior initial conditions. Let me know how y'all feel about that.

Back to Table of Contents


In tests with the Eta-10 back in the late 1990's, we modified all the DSP's (deficit saturation pressure; used to construct the reference profiles) based on the surface pressure, which made it easier (too easy) to get convective precip at all high-terrain points. We also changed the shallow/deep switch to 2000m (from 290mb).

In what was implemented, we only changed the DSP's for points where the cloud base is above a layer that is 150 mb below the freezing level. The DSP's used to be a constant (-30mb) value, which was too moist and made it very difficult to produce precip. Now they are similar to DSP's for clouds that have bases well below the freezing level; the DSP's at and below the freezing level are set to DSP0 (~ -70 mb depending on cloud efficiency), and are linearly interpolated from freezing level to cloud top, just like in the "normal" clouds. We also set the deep/shallow switch to 200mb*(psfc/1000).

That's probably too much detail. Basically, the difference is that in the Eta-10, we changed ALL the DSP's and went too far (too dry, too easy to get precip) at higher terrain. This produced too much convective precip in the west. In what was implemented, we only changed the "bad" DSP profiles which only got specified if the cloud base was close to the freezing level. These were set to a constant value which was too moist; now they are being set to values that are similar to what they would be if the cloud were over, say, Norman OK. This makes the bias in the west similar to what it is at all other regions around the country. We are eliminating a good piece of the regional variance in bias.
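The revised DSP specification described above can be sketched in a few lines. This is purely an illustration: the function name is ours, and the cloud-top value `dsp_top` is an assumed placeholder (the text only gives DSP0 ~ -70 mb, which itself depends on cloud efficiency).

```python
import numpy as np

def dsp_profile(p, p_freeze, p_top, dsp0=-70.0, dsp_top=-20.0):
    """Illustrative DSP (deficit saturation pressure) profile for the
    revised high-based-cloud case: DSP is held at dsp0 at and below the
    freezing level, then linearly interpolated in pressure from dsp0 at
    the freezing level to dsp_top at cloud top.  dsp_top is an assumed
    value; the operational DSP's vary with cloud efficiency."""
    p = np.asarray(p, dtype=float)
    # Fractional distance (in pressure) from freezing level to cloud top,
    # clipped so that everything at/below the freezing level keeps dsp0.
    frac = np.clip((p_freeze - p) / (p_freeze - p_top), 0.0, 1.0)
    return dsp0 + frac * (dsp_top - dsp0)
```

For example, with a freezing level at 600 mb and cloud top at 400 mb, every level at or below 600 mb gets -70 mb and the value relaxes linearly toward the cloud-top value above that.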

Changes were also made to the DSP's to reduce a coastal bias. Bullseyes of heavy convective precipitation tended to appear right on the Atlantic and Gulf coastlines of the southeastern U.S. Tests indicated that this could be attributed to the difference between the DSP's over land and those over water. Essentially, the profiles were constructed such that it was easier to produce deep convection over land than it is over water. When there was a deep onshore flow with a long fetch, the column of air which attained a sort of "quasi-equilibrium" with the reference profile over water suddenly reached the shore and was given a reference profile that was far more conducive to producing heavy convective rain. To overcome this problem, the profiles were unified to be the same at all points; the sea DSP's were chosen. It should be noted that the original version of the BMJ scheme used the sea profiles at all points. The land values were created to attack a dry bias over the land, but we think that this bias may have been at least partially due to shortcomings with the scheme over the western U.S. as described above.

A more complete discussion can be found in the TPB.

Back to Table of Contents


Yes, we are making everyone's job harder by changing the physics. We don't mean to do that, but it's a fact, no doubt about it. But I would argue that the "rules of thumb" that have been used to forecast parameters that the models haven't done a good job of forecasting in the past were developed to overcome deficiencies and a lack of sophistication in the model physics. I don't think we can avoid changing the way we use models once the models start changing and (hopefully!) doing a better job of forecasting physical processes. It's our job to try to understand the impact of these changes and to show how to use this new guidance in a different, but hopefully better, way to produce better forecasts. I'm sure that we're not doing a very good job at this, but it's a new world for us too. The frequency of changes to the operational models has just recently started to increase (in a huge way), so we need to change the way we do business in getting this sort of information out to the users of the models. Any ideas on how to improve how we get information out to the field will be much appreciated!!! I hope I'm not sounding too preachy, but we are moving into a new era, and you and I are sort of like explorers venturing into a new frontier. It's going to be rough, but I think we'll all be better off; the extra effort it takes now will pay off eventually.

Back to Table of Contents


Staggered grid

 H    V    H    V    H
 V    H    V    H    V
 H    V    H    V    H
 V    H    V    H    V
 H    V    H    V    H

 -dlon-       IM=3, JM=5

 H = mass point   V = velocity point

For this example, GDS octets 18-20 would be 3 (the number of mass points along the southernmost row) and GDS octets 21-23 would be 5 (the number of rows in each column). For the 29km grid, there are 181 mass points in the southernmost row and 180 velocity points in that same row, for a total of 361 points (the next row then has 180 mass and 181 velocity points). In the north-south direction for the 29km grid, there are 271 total points in each column (136 mass and 135 velocity in the first column, for example). The east-west resolution of the grid is 0.19444 deg (dlon=7/36) and the north-south resolution is 0.185185 deg (dlat=5/27).
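The point-counting arithmetic above can be reproduced with a short sketch (the function name is ours, not NCEP code):

```python
def e_grid_counts(im, jm):
    """Point counts for a semi-staggered E-grid, where im is the number
    of mass (H) points in the southernmost row (GDS octets 18-20) and jm
    is the number of rows (GDS octets 21-23).  H and V points alternate
    along each row and along each column."""
    points_per_row = 2 * im - 1        # im mass + (im - 1) velocity points
    mass_in_first_col = (jm + 1) // 2  # first column starts with an H point
    vel_in_first_col = jm // 2
    total_points = points_per_row * jm
    return points_per_row, mass_in_first_col, vel_in_first_col, total_points
```

Calling `e_grid_counts(181, 271)` for the 29km grid recovers the numbers quoted above: 361 points per row, with 136 mass and 135 velocity points in the first column.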

Back to Table of Contents



The shrinkage of the Nest-in-the-West was our fault here at EMC. I've attached a response I recently composed to Peggy Bruehl on this topic. I am sincerely sorry that you will no longer be covered by the Eta-10 nest.

1) Our initial commitment to run the Nest-in-the-West was for 6 months so that Western Region could perform a subjective evaluation of these experimental forecasts. The evaluation was completed in May and our 6 month period ended in June. Making this run in near real time has taken a considerable amount of computer time and resources and cannot be sustained if we are to make progress towards our 4/day early Meso Eta runs with a large domain 32km/45level version.

2) There were requests to continue the runs. a) there is a southern CA Ozone study taking place until August for which data are being taken for retrospective comparison (also MM5 runs) and very positive feedback has come from the coastal offices indicating the Eta-10 would be of considerable use, b) Arizona will be taking extra soundings for their convective season and will give lots of attention to Eta-10 and its summertime performance, c) Salt Lake City folks have found lots to complain about so we'd like them to continue to keep the heat on us and d) Tom Potter told Ron McPherson it would be very greatly appreciated if we could continue Eta-10 even if on a limited (area) basis.

3) Therefore, rather than turning it off completely, the nest-in-the-west has shrunk to about half its original size. The reduced domain covers California, Nevada, Utah and Arizona. This is a permanent change until the run is removed in late July or August.

4) I'm afraid we won't be able to accommodate SR, CR or even northern WR with real time Eta-10 runs.

5) There is a lot involved in getting our 4/day early Meso Eta runs with a large domain 32km/45level version tested, verified and implemented. We are also very heavily involved in procurement and benchmark activities for the new Class VIII computer (which will allow us to run 4/day early Meso Eta runs with a large domain at 10km/60level version of the model!) If it weren't for all this activity, I would offer to make episodic Eta-10 runs for CR & SR (ER too), but I'm stretched too thin already.

6) I intend to run the Eta-10 in support of Lake Effect snow and other experiments this winter. The domain is TBD but will cover parts of CR & ER.

Geoff DiMego

Back to Table of Contents


Use of the original Betts-Miller scheme was decided upon after considering the various options. Although a large part of the observational data used by Betts and Miller was over tropical oceans, we knew that it was not exclusively so. The scheme appeared appealing to us and we decided to give it a try. This was back in the late '80's. From the time it was incorporated into the model, we began making changes, generalizations, and what we considered to be improvements to the scheme. So many changes were made to the original scheme that we now refer to it as the Betts-Miller-Janjic (BMJ) scheme since Zavisa Janjic was responsible for the vast majority of significant changes. You can read about much of what he did in the May 1994 issue of Monthly Weather Review (pp. 927-945) where he updates descriptions of his changes to the Eta Model's physics package.

We have been computing and carefully watching the objective equitable threat score statistics for all of the models run here at NCEP, and the Eta Model's precipitation has consistently produced the superior scores over the continental U.S. year after year. This applies to all seasons of the year. There was concern that the BMJ scheme would 'break down' at high resolution, but it has yet to do so. The BMJ convective scheme can therefore justifiably be called a 'land scheme'. The other schemes involved are the Kuo parameterization in the NGM and the modified Arakawa-Schubert parameterization used in the AVN/MRF. The Kain-Fritsch scheme is being tested in the Eta model runs at NSSL. The Kain-Fritsch scheme will be considered for implementation in the operational Eta at some point.

Again, the BMJ scheme continues to evolve because no parameterization is close to perfect. In past years, we have changed the code to reduce a heavy convective rain bias along the southeast coastline and reduce a dry bias over the western U.S. We are currently looking at ways to improve both the deep and shallow convection with particular attention to the impact of shallow convection on model forecast soundings.

Back to Table of Contents


The convective scheme works as advertised; it produces convective precipitation and modifies the vertical profiles of temperature and moisture. The scheme does not produce any condensate that is resolved at the grid scale. A separate parameterization in the radiation scheme diagnoses cloud optical properties solely as a function of convective rainfall rates. The BMJ scheme is not directly involved in the process of producing various forms of condensate (e.g. cloud water, rain, ice), although this is something that is in the longer range plans.

The grid-scale microphysics scheme accounts for microphysical changes of various forms of condensate in the form of cloud droplets, rain, small ice crystals, and a general "precipitation ice" category. Precipitation ice is in the form of snow, graupel, or sleet. Rain and precipitation ice fall, but not all of it falls out instantaneously. Small cloud droplets and ice crystals do not fall, as in the previous grid scheme of Zhao and Carr (1997).

Simple changes in the BMJ scheme are currently being tested to promote the formation of cloud and precipitation condensate that can be more explicitly (and hopefully more accurately) resolved by the grid-scale microphysics.

Back to Table of Contents


The NAM forecast is initialized with the NAM Data Assimilation System (NDAS). The NDAS runs a sequence of 4 GSI analyses and 3-h NMMB forecasts, starting at t-12h prior to the NAM forecast start time. At the t-12 h start time of the NDAS, we use atmospheric states from the NCEP Global Data Assimilation System (GDAS), while the NDAS soil states are fully cycled from the previous NDAS run (we call this hybrid setup "partial cycling").

The soil moisture and soil temperature fields in the 4-layer (Noah) land-surface model (LSM) used by the operational NDAS/NAM are continuously cycled without soil moisture nudging (e.g. to some climatology). The fields used now in the NAM are the sole product of model physics and internal NDAS surface forcing (e.g. precipitation and surface radiation). During the forecast portion of the NDAS, the model predicted precipitation that would normally be used as the forcing to the Noah scheme is replaced by the hourly merged Stage II/IV precipitation analyses (go here for details on the Stage II/IV analyses). The more timely Stage II (created directly from the hourly radar and gauge data) is used to supplement the Stage IV (regional analyses from the River Forecast Centers, mosaicked for national coverage).

To address biases in the Stage II/IV analyses, until January 2010 we used the CPC daily gauge analysis to construct a long-term 2-d precipitation surplus/deficit (hourly vs. the more accurate daily analysis). This long-term precip budget is used to make adjustments to the hourly analyses (up to +/- 20% of the original hourly values). As of January 2010, the CPC daily gauge analysis was terminated, so in the operational NDAS we are just making adjustments to the Stage II/IV based on the budget that was in place as of that date. In the parallel NDAS, we are using the Climatology-Calibrated Precipitation Analysis (CCPA, Hou et al, 2012 J. Hydro.) to make adjustments to the long-term precip budget. This will be implemented in operations in the next NAM upgrade sometime in 2014.
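The +/- 20% budget adjustment described above might be sketched as follows. This is a simplified illustration, not the operational code; the names and the ratio formulation are our assumptions.

```python
import numpy as np

def adjust_hourly_precip(hourly, budget_ratio, max_adj=0.20):
    """Adjust an hourly precipitation field by a long-term budget ratio,
    limited to +/- 20% of the original hourly values.

    hourly       -- hourly Stage II/IV precipitation amounts (e.g. mm)
    budget_ratio -- long-term (reference analysis / hourly analysis)
                    accumulation ratio at each point; values > 1 mean the
                    hourly analyses have been running dry relative to the
                    more accurate reference (daily gauge or CCPA) analysis
    """
    # Cap the correction so no hourly value changes by more than max_adj.
    factor = np.clip(budget_ratio, 1.0 - max_adj, 1.0 + max_adj)
    return hourly * factor
```

Even a large long-term deficit (say a ratio of 1.5) thus nudges a given hour's analysis by at most 20%, spreading the correction over many cycles instead of distorting any single hour.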

Back to Table of Contents



There is some major confusion here about model runs and output grids which I hope is not rampant among all the regions. The following facts, I hope, will begin to clear up some of it. I know the way these data are identified on the OSO SERVER is confusing, so we have to be very careful.

I'm definitely biased, but I don't agree that the value of the Eta forecasts diminishes so rapidly with forecast range as to be useless beyond 18 hours. For example, we do not see a rapid drop off in even our QPF scores. Similarly, Western Region has been quite pleased with the Eta and has not experienced significant drops in value out to 33 hours. Every performance measure we have indicates that the Eta is our best model, and dropping its output would seem to be shortsighted. The #212 (the 40km OUTPUT grid) fields are adequate in representing the 32km Meso Eta forecast fields on isobaric levels, but the surface and precip fields are desirable on #215, the 20 km OUTPUT grid, where the full computational resolution can be reproduced.


1) NCEP runs at 00z and 12z a 22 km / 50 level early Eta from which AWIPS grid #211 80 km output fields are generated for OSO and FOS.

2) NCEP runs at 06z and 18z a 22 km / 50 level Eta from which BOTH of the following are generated as dictated by AWIPS Appendix K: a)full complement of fields on the AWIPS 40 km grid #212 PLUS b)a small set of fields on the AWIPS 20 km grid #215 - not a different model run, just additional output on a higher-resolution grid.

3) on the OSO server, the early Eta output data described by 1) come under directories beginning with us008_gf089_97mmddhh_YxQAx and us008_gf089_97mmddhh_ZxQBx where the 089 indicates the early Eta (48km) is the generating model and the Q indicates output on grid #211.

4) on the OSO server, the off-time Eta output data described by 2a) come under directories beginning with us008_gf085_97mmddhh_YxRAx and us008_gf085_97mmddhh_ZxRBx where the 085 indicates the "Meso Eta" is the generating model and the R indicates output on grid #212.

5) on the OSO server, the off-time Eta output data described by 2b) come under directories beginning with us008_gf085_97mmddhh_YxUAx and us008_gf085_97mmddhh_ZxUBx where the 085 indicates the same "Meso Eta" is the generating model and the U indicates output on grid #215, just a higher resolution output grid for fields like pressure, precipitation, 2 m temperature, 2 m RH, 10 m wind and boundary layer fields.
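Items 3)-5) can be condensed into a small decoder sketch. The regex and function name are ours, and the mapping covers only the codes listed above; the server carries other codes not handled here.

```python
import re

# Model and output-grid codes taken from items 3)-5) above; other codes
# exist on the OSO server but are not mapped in this illustration.
MODEL = {"089": "early Eta (48km)", "085": "Meso Eta"}
GRID = {"Q": "#211 (80 km)", "R": "#212 (40 km)", "U": "#215 (20 km)"}

def decode_oso_dir(name):
    """Split an OSO server directory name like us008_gf085_97mmddhh_ZxUBx
    into (generating model, output grid, date string); returns None if the
    name does not match the pattern described above."""
    m = re.match(r"us008_gf(\d{3})_(\d{8})_[YZ]x([QRU])[AB]x$", name)
    if m is None:
        return None
    model_id, date, grid_code = m.groups()
    return MODEL.get(model_id), GRID.get(grid_code), date
```

So `decode_oso_dir("us008_gf085_97051812_ZxUBx")` identifies a "Meso Eta" run posted on the 20 km grid #215, which is the case described in item 5).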

Back to Table of Contents


A forecaster has noted that in the springtime in the southern Plains, looking at the hourly forecasts from the Meso Eta, the 2m dewpoint "spikes" up at 00Z.

The 2m T and q are derived by assuming a constant flux in the layer from the ground up to the middle of the 1st model level. The 1st model level also shows a jump, but it's not as obvious (+1 deg Td instead of +2 deg) and it doesn't come right back down the next hour.

The surface exchange is driven by the incoming radiation, which is still turned on at 00z. Probably because of the time of year, with the soil relatively moist, more of the energy is going to evaporation than sensible heating. It turns out that at 00z, the sensible heating has turned off and has actually switched to cooling, so the surface temps are starting to cool and the vertical mixing from the surface has shut off. But there is still fairly strong evaporation, so the moisture is increasing just above the ground. Then, about an hour later, the sun has gone down and there is no more energy driving the evaporation, so it shuts off too, which probably explains why the dew point comes back down right after 00z.
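As a rough illustration of the constant-flux diagnosis mentioned above, a neutral log-profile interpolation between the skin value and the first model level looks like this. This is a deliberate simplification: the operational surface-layer scheme includes stability corrections not shown here, and the names are ours.

```python
import math

def shelter_value(v_skin, v_lev1, z_lev1, z0, z=2.0):
    """Interpolate a surface-layer quantity (T or q) to height z assuming
    a constant-flux layer with a neutral logarithmic profile between the
    surface (roughness length z0) and the middle of the first model level
    at height z_lev1 (all heights in meters)."""
    w = math.log(z / z0) / math.log(z_lev1 / z0)
    return v_skin + w * (v_lev1 - v_skin)
```

The 2-m value always lies between the skin and level-1 values, so when strong evaporation moistens the air just above a cooling surface, the diagnosed 2-m dew point can jump even while the level-1 value moves only slightly.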

Back to Table of Contents

Cold bias in 2-m air temperatures over snowcover in Eta model.

Snow cover effects are an important influence on both daytime and nighttime 2-meter air temperatures. NCEP/EMC has made, and continues to make, a concerted effort in the Eta model to both properly a) initialize snow cover and snow depth fields and b) model the difficult physics of snow cover effects on the lower atmosphere. We are joined in this effort by key collaborators in the Office of Hydrology (OH) and NESDIS.

We are keenly aware of a 2-meter air temperature cold bias over snow cover in the present operational Eta model. Additionally, we are rapidly getting a handle on the physical causes of the bias and are well along on testing new extensions to the Eta model land-surface physics to eliminate the bulk of this bias. However, operational implementation of these improvements will most likely not take place until Fall of 1999, because adequate testing of the changes has been hampered by a) the current saturation of our CRAY computer mainframes, b) Y2K tests, and c) resources devoted to getting NCEP models running on the new Class VIII IBM/SP Supercomputer being delivered and installed during the current winter.

The 2-m cold bias in the Eta over snow cover has rather different causes, we believe, during daytime and nighttime, so I am going to address them separately.

Daytime Cold Bias

During the day, if temperature and radiation conditions are such that the snow pack physics begin melting the snow, then the snow surface skin temperature is constrained to the freezing point (0 C) throughout the snowmelt process. Hence the overlying 2-m air temperatures in the model rarely rise above about 1-2 C during this melt process. This is in fact reasonable over melting DEEP snowpack, but not over shallow melting snowpack. In the real world, over shallow melting snowpack (say less than about 3-4 inches), the snowpack becomes patchy, with spotty bare ground patches breaking through, and the area-average skin temperature can rise notably above freezing, with overlying 2-m air temperatures going well above freezing (say upper 30's F, 40's F, or even low 50's F in extremely favorable conditions in late spring with clear skies, relatively strong solar radiation, and warm southerly advection). The present physics of the Eta model (and NGM as well) does NOT allow for patchiness in the snow cover. Even for very shallow snowdepth, the Eta model (and NGM) assumes the entire grid box is 100 percent snow covered.

Hence, the surface skin temperature is held to 0 C over melting snow until ALL the snow cover is melted in the model. One can easily observe this model behavior in the Eta model BUFR output, by inspecting the skin temperature, 2-m air temperature, and snowdepth (expressed as water equivalent). When the melting snowdepth reaches zero, say, during a mid-day melt episode, the skin temperature and 2-m air temperature finally begin to rise significantly above freezing.

A secondary problem that contributes to the above is that we feel that our modeled surface albedo over snow cover may be too high, thus reflecting too much surface solar insolation. We reduced the albedo over snow somewhat in operational Eta changes we made in Feb 97, but we feel more reduction may be needed.

Another contributing secondary problem is the physical modeling of ground heat flux (or subsurface heat flux) under the snowpack. This heat flux process under snowpack is difficult to model and is very sensitive to the snowdepth and snowpack density. Often the current physics yields too much ground heat flux, and hence less energy is available to melt the snow and heat the lower atmosphere.

NCEP/EMC and the Office of Hydrology have formulated new snowpack physics to address the above problems, and we have tested the new physics extensively in "uncoupled" mode (wherein the entire land/snow/soil/vegetation physics subsystem is executed separate from the Eta model, using observed low-level atmospheric conditions). A formal journal paper describing these uncoupled tests has recently been accepted for publication (entitled "A parameterization of snowpack and frozen ground intended for use in NCEP Weather and Climate Models", by Victor Koren, John Schaake, Ken Mitchell, and co-authors, to appear in Journal of Geophysical Research). Testing of the new physics in the Eta model will begin next month, but operational implementation will very likely await next Fall.

Aside comment: We have heard some folks wrongly state that the surface skin temperature in the Eta model is always 0 C. This is only true in the melting state. In colder nonmelt conditions, the Eta skin temperature over snow is below freezing, even well below freezing if air temperatures warrant.

Nighttime Cold Bias

The nighttime 2-m temperature cold bias in the Eta model is a more general problem, that occurs over both non-snow covered and snow-covered ground, both in the warm season and the cool season.

This bias is worse in the cool season, and worse yet in the cool season over snow cover. The basic cause of this cool bias is that the Eta model physics yields too little near-surface vertical turbulent mixing during calm nighttime conditions (i.e. stable nighttime low-level temperature inversions, referred to as the stable boundary layer). This problem is greater in the cool season because of the longer nights, the greater tendency for cool season nighttime winds to go calm, and the cooling effect of snow cover yielding even stronger nighttime temperature inversions. Hence, the following conditions tend to yield the greatest cold bias in nighttime 2-meter temperatures: calm winds, clear skies, long night, and snow cover. The nighttime situation has a positive feedback character, because as the low-level inversion sets in, the surface vertical turbulent mixing of heat falls off, which in turn acts to strengthen the inversion, etc. The snow cover exacerbates the feedback, because the snowpack insulates the lower atmosphere from the deep soil heat reservoir, which in the absence of snow can act as a heat source to the lower atmosphere at night.

Ironically, as the vertical resolution of a model increases (i.e. Eta versus NGM or AVN), we feel there is a tendency for the above positive feedback to be enhanced, because the enhanced resolution is able to better resolve stronger, more extreme, shallow nighttime inversions.

Hence, we are embarking on tests of different approaches to the process of nighttime, near-surface, vertical turbulent mixing, to account for such real-world subgrid effects as slope breezes and horizontal turbulent mixing that enhance nighttime turbulent mixing.

Initialization of Snowcover

In an extensive collaboration with NESDIS, NCEP/EMC has invested much effort toward improving the initialization of snow cover in all NCEP NWP models. As a result of this collaboration, in Jan 98, NESDIS began operational production of a new daily, 23-km N. Hemisphere snow cover and sea-ice analysis (known as the Interactive Multi-sensor Snow -- IMS) in the SAB branch of NESDIS. The Eta model began operationally using this analysis on 03 Jun 98. The AVN and NGM models began using the IMS product during the Fall of 98. The IMS is only a snow cover product thus far (not snow depth). So the Eta, NGM, and AVN models continue to use the daily Air Force snowdepth product as the source of snowdepth, but the snow cover of the latter is screened to completely agree with the new daily NESDIS product. (As a future improvement, we are working on incorporating the thrice-weekly 1-km snow-water equivalent analysis available from NOHRSC (NWS/OH) during the months of Jan-May.)

Initial snow analysis errors can obviously be a factor in the cold biases discussed earlier. In particular, if the snow cover analysis has snow present at a particular location, when in fact no snow cover exists, then the cited cold bias tendencies are particularly notable. Forecasters should utilize BUFR output inspection tools (e.g. BUFKIT) to examine the initial Eta snowdepth at their station in a given Eta run. If the Eta is initialized with snow at their location, but no snow or shallow snow exists there in reality, then the forecaster should expect the Eta 2-m temperature forecast to be too cold, especially during daytime hours, and especially if the daytime temperatures are otherwise expected to be above freezing.

Forecasters can also inspect the daily NESDIS IMS snowcover product online by visiting the NESDIS website (if necessary, scroll to the snow analysis section).

Feedback on the NESDIS IMS snow/ice product can be given to the IMS Analyst Team Leader, Tom Baldwin at

The IMS is updated once daily, around 5 pm EST, and first gets into the 00Z Eta run.

Back to Table of Contents

Eta Post Processor


The sea level pressure found in AFOS uses the Shuell reduction method (PMSL), which is a function mainly of low-level temperature. This is quite different from the Mesinger method, which can be found on AWIPS (EMSL). The Mesinger method uses a horizontal Poisson equation to smoothly interpolate from above-ground to underground temperatures, resulting in a smoother field, particularly over the Rockies.
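
A minimal illustration of the Shuell-style idea (a reduction driven mainly by low-level temperature): extrapolate the surface temperature downward at the standard-atmosphere lapse rate and integrate hydrostatically. This is a textbook sketch with an illustrative function name, not the operational algorithm, which includes additional temperature adjustments:

```python
import math

R_D = 287.05    # gas constant for dry air, J kg-1 K-1
G = 9.80665     # gravity, m s-2
LAPSE = 0.0065  # standard-atmosphere lapse rate, K m-1

def pmsl_low_level_temp(p_sfc_pa, t_sfc_k, elev_m):
    # Mean temperature of the fictitious below-ground column, obtained by
    # extrapolating the surface temperature down at the standard lapse rate.
    t_mean = t_sfc_k + 0.5 * LAPSE * elev_m
    # Hydrostatic (hypsometric) reduction to sea level.
    return p_sfc_pa * math.exp(G * elev_m / (R_D * t_mean))
```

Because the result depends directly on the extrapolated low-level temperature, warm or cold surface layers over high terrain show up strongly in the reduced field, which is exactly the behavior the Mesinger method smooths away.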

PMSL has been added to AWIPS grids for the Eta, but they are not distributed yet. A change request must be filed with the AWIPS DRG before that can happen.

Some further comments on this subject:

Let me attempt to convey a couple of my thoughts & feelings on sea-level pressure. Remember that the Eta's initial field of sea-level pressure is not a result of performing a sea-level pressure analysis. It, like at all the other forecast output times, is computed diagnostically via the reduction procedure using the free atmosphere values of the state variables in the model. I admit to being unemotional about this because after all, except over the oceans, this is an UNDERGROUND quantity. I know that some folks are quite emotional about the way this quantity is computed and I have witnessed many (TOO MANY!) hours of debate between advocates of one method or another. I characterize this as 'emotional' to distinguish it from a strictly scientific argument because there is no precise theoretical guidance on the computation of sea-level pressure underground. During the days of mercury barometers, there was considerable diversity of methods for local correction of reported values. Some of that persists even today although ASOS has made things a bit more uniform.

Back to Table of Contents


Here is some information about the so-called supplemental fields from the early Eta and Meso Eta. They have been requested by the ICWF folks and can be used as an alternative starting point (the previous ICWF forecast or a gridded NGM MOS are the other 2 options) for ICWF. The gridded fields can be ingested directly into the ICWF from their Lambert conic-conformal grid. I prefer to call them sensible weather guidance fields from the Eta Model.

These were first generated from the 10km Nested Eta runs performed for last summer's Olympic games and used as an alternative starting point in the ICWF processing for venue specific forecasts. For the Olympics, the fields were on a 10 km grid #218 (Lambert conic conformal). For temperature, the dependence of surface temperatures on elevation came out very nicely. These fields have been generated from the early Eta (on 40 km grid #212) and from the Meso Eta (on 20 km grid #215) for about a year now. These fields have been available on NCEP's NIC server and more recently on the OSO server. DRG RC#2111 put them into the main AWIPS distribution stream.

Here are the fields, which are available every 3 hours throughout the forecast period, and a few comments on how they are generated:

  • Surface wind speed and direction - This is a diagnosed wind from the Eta Model at 10 m above model terrain. Except that it is in the form of direction & speed, it is IDENTICAL to the surface wind coming out of the Eta Model in the form of u & v components. ICWF requested that the surface wind field be in that form.
  • Surface temperature and dew point - These are diagnosed from the Eta Model at 2 m above model terrain. The temperature is IDENTICAL to the surface temperature coming out of the Eta Model now. Except that it is in the form of dew point, this surface moisture field is IDENTICAL to the 2 m specific humidity coming out of the Eta Model. Again, ICWF requested the field be in this format.
  • Maximum temperature - This is the maximum temperature at the 2 m level above model terrain. At 12z, the max temperature is set to the 12z temperature. Between 12z and 00z, the max temperature is reset every 3 hours if the current temperature exceeds the previous max temperature. Between 00z and 12z, the max temperature is unchanged.
  • Minimum temperature - This is the minimum temperature at the 2 m level above model terrain. At 00z, the min temperature is set to the 00z temperature. Between 00z and 12z, the min temperature is reset every 3 hours if the current temperature is less than the previous min temperature. Between 12z and 00z, the min temperature is unchanged.
  • Cloud Cover - This comes straight from looking 'upward' at the Eta Model cloud fields carried/predicted at each Eta layer and is the total cloud 'cover' in percent.
  • QPF - 3 hour total precipitation accumulation (m). This is also IDENTICAL to the current Eta Model output field.
  • Snow - This is the 3 hour snow accumulation (m). This is made by assuming a fixed snow-to-water ratio of 10-1 and using the snow water equivalent carried in the Eta Model land-surface package.
  • PRECIP Probability (%) - This field is a place holder for ICWF and has no value added over the QPF field. The POP field is simply a scaled representation of the 3-hour QPF. Wherever the QPF is less than .01mm, POP is set to 0%; wherever the QPF is greater than 6mm, POP is set to 100% and POP is scaled linearly in between.
  • Thunderstorm Probability (%) - This field is computed from the Eta Model surface based Lifted Index based on a linear fit of a summer's worth (1995) of data for thunderstorm frequency versus LI (Jason Taylor): P(thunder)= -12.76 x LI + 29.33 with values constrained to be between 0 and 100%.
  • Probability of frozen precipitation - This field is NOT a probability. It is set to 100% if the Eta Model decision-tree (Mike Baldwin) for precip type indicates snow or sleet. It is 0%, otherwise.
  • Probability of freezing precipitation - This field is NOT a probability. It is set to 100% if the Eta Model decision-tree (Mike Baldwin) for precip type indicates freezing rain. It is 0%, otherwise.

(Eta Model precip type information has been part of the hourly BUFR soundings for several years, so these last two items are not completely new)
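
The two probability place-holders above follow simple closed forms that can be written out directly. In this sketch the interior of the POP ramp is assumed to run linearly between the stated 0.01 mm and 6 mm endpoints, and the function names are ours:

```python
def pop_from_qpf(qpf_mm):
    # Scaled representation of the 3-hour QPF (a place-holder, not a true POP):
    # 0% below 0.01 mm, 100% above 6 mm, linear in between (assumed endpoints).
    if qpf_mm < 0.01:
        return 0.0
    if qpf_mm > 6.0:
        return 100.0
    return 100.0 * (qpf_mm - 0.01) / (6.0 - 0.01)

def thunder_prob(lifted_index):
    # Linear fit of 1995 thunderstorm frequency vs. surface-based LI
    # (Jason Taylor), clipped to the 0-100% range.
    p = -12.76 * lifted_index + 29.33
    return min(100.0, max(0.0, p))
```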

In evaluations of the Olympic Eta-10km temperature, specific humidity and wind speed forecasts against surface reports, the Eta-10 was found to be superior to the Eta-29 Meso Eta forecasts. These comparisons were WITHOUT compensation for the difference between the Eta terrain and that of the stations. The error level would be expected to be higher than that of MOS at the MOS stations, but on the other hand, MOS would be unable to modulate temperatures from valleys to mountain tops. Undoubtedly, this is the largest cause of perceived systematic error in Eta forecasts versus MOS forecasts (which we don't access). It is our intent to perform this required compensation in the Eta Model post-processor in the very near future, resulting in model forecast fields of the sensible weather elements listed above that will be directly comparable to the observations at surface reporting stations.

Back to Table of Contents


We have hourly output files available via anonymous ftp on a near real-time basis. These files are in BUFR format and contain forecasts of sounding and surface variables. To get them, ftp to the server, log on as anonymous, and cd to /pub/data1/eta/erl.YYMMDD/bufr.tXXz (where XX is 00, 06, 12, or 18). Inside this directory should be a list of about 1200 files, each one containing data for each hour of a 60-hr Eta forecast for a given station location. The files are named bufr.STATID.YYYYMMDDHH, where STATID is the 6-digit station id and YYYYMMDDHH is the date and hour of the beginning of the forecast. A station list is available. A guide to unpacking BUFR can be found here. Displays of the BUFR data for every station can be found at the Eta Meteogram web site.

Back to Table of Contents


Two types of CAPE/CIN are currently computed by the Eta model post-processor (both Early Eta and Meso Eta). The actual computation is the same in both cases; what differs is the parcel that is lifted. The first computation is listed in a GEMPAK (N-AWIPS) "gdinfo" listing as:

    CAPE    NONE      0
    CINS    NONE      0 
In these fields the model sounding is searched in a 70mb layer above the ground to find the parcel with the highest THETA-E. In GRIB this is labeled as surface CAPE/CINS (level type=1).

The second computation is listed in a GEMPAK (N-AWIPS) "gdinfo" listing as:

    CAPE    PDLY     180       0
    CINS    PDLY     180       0
In these fields the 30mb thick "Boundary Layer" with the highest THETA-E is used to define the parcel. In GRIB this is labeled as pressure depth layer CAPE/CINS (level type=116).

Recall that after integration of a forecast, the Eta model post-processor creates 6 "boundary layers" each 30mb thick that are terrain following (sort of like sigma surfaces) and stacked on one another. In GEMPAK they are labeled as PDLY layers (Pressure Depth LaYer). The first 30mb thick layer (PDLY 30:0) is from p[sfc] to p[sfc]-30mb. These gridded fields are averages for a 30mb layer that can contain as many as 5 Eta levels. The second 30mb layer (PDLY 60:30), third (PDLY 90:60) ... etc. are all averages for terrain following layers stacked upon one another. If the surface pressure were everywhere 1000mb the layers would be 1000-970mb, 970-940mb, 940-910mb, 910-880mb, 880-850mb, and 850-820mb. In reality the layers are terrain following with the bottom layer (30:0) starting at the surface pressure. It is one of these layers (the one with the highest layer-average THETA-E) that serves as the parcel for this "best" CAPE/CIN calculation.
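
The stacking described above is easy to reproduce; here is a small sketch (function name is ours) that generates the six PDLY bounds from a given surface pressure:

```python
def pdly_layers(psfc_mb, n=6, thickness=30.0):
    # Terrain-following "boundary layers", each 30 mb thick, stacked
    # upward from the surface pressure; returns (bottom, top) pairs.
    return [(psfc_mb - k * thickness, psfc_mb - (k + 1) * thickness)
            for k in range(n)]
```

For a 1000-mb surface pressure this reproduces the 1000-970, 970-940, ... 850-820 mb layers quoted above.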

In both cases the vertical integration of the positive or negative buoyancy for the selected parcel is continued through the highest buoyant layer. This assures that the computation will not miss a second layer of (perhaps major) instability that is "capped" by a second (perhaps minor) stable layer; all unstable layers are considered. It also means that in areas of zero CAPE the CIN will be zero as well. To avoid misinterpretation, areas of zero CAPE should probably be clearly indicated, since they are likely regions of strong stability in which CIN is, strictly speaking, undefined.
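
That integration rule can be sketched as follows: a simplified trapezoidal buoyancy integration on height levels, assuming parcel and environment virtual temperatures are already in hand. This is an illustration of the rule, not the post-processor's actual code:

```python
G = 9.81  # m s-2

def cape_cin(tvp, tve, z):
    """tvp/tve: parcel and environment virtual temperatures (K) on
    heights z (m), ordered bottom-up.  Integration continues through
    the highest buoyant level, so capped unstable layers still count;
    with no buoyant level at all, both CAPE and CIN come back zero."""
    buoyant = [k for k in range(len(z)) if tvp[k] > tve[k]]
    if not buoyant:
        return 0.0, 0.0
    top = buoyant[-1]
    cape = cin = 0.0
    for k in range(top):
        # Trapezoidal mean buoyancy g*(Tvp - Tve)/Tve over layer k..k+1.
        b = 0.5 * G * ((tvp[k] - tve[k]) / tve[k]
                       + (tvp[k+1] - tve[k+1]) / tve[k+1])
        contrib = b * (z[k+1] - z[k])
        if contrib > 0.0:
            cape += contrib
        else:
            cin += contrib  # CIN accumulates here as a negative number
    return cape, cin
```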

Some of these descriptions conflict with Russ Treadon's post-processor TPB but do reflect the current methodology employed in the Early Eta and Meso Eta models.

According to Ralph Petersen, PCGRIDDS will make up a label name that is the average of the 2 levels given if more than one level is packed into the GRIB header, so the "best" CAPE will have a level indicator that is around 910, while the "surface" CAPE will have a level indicator of b015.

Back to Table of Contents


Storm-relative helicity is now computed using the Internal Dynamics (ID) method (Bunkers et al. 2000). Prior to March 2000, the model used the Davies and Johns method, in which supercell motion is estimated to be 30 degrees to the right of and 85% of the mean wind vector for an 850-300 mb mean wind < 15 knots, and 75% of the mean wind vector for an 850-300 mb mean wind > 15 knots. This method works very well in situations with "classic" severe-weather hodographs but works poorly in events characterized by atypical hodographs featuring either weak flow or unusual wind profiles (such as northwest flow). The ID method has been found to perform as well as the Davies and Johns method in the classic cases and much better in the atypical cases. The ID method includes an advective component (associated with the 0-6 km pressure-weighted mean wind) and a propagation component (associated with supercell dynamics) that adjusts the motion along a line orthogonal to the 0-6 km mean vertical wind shear vector. A storm motion vector is computed, and this is used to compute helicity. The relevant model fields and WMO parameter IDs are:

    VALUE   PARAMETER                       UNITS
    190     Storm-relative Helicity         m**2/s**2 
    191     U-component Storm Motion        m/s
    192     V-component Storm Motion        m/s
At the National Centers the data will appear in a GEMPAK "gdinfo" listing of the Meso Eta model output as:
    HLCY    HGHT    3000       0
    USTM    HGHT    6000       0
    VSTM    HGHT    6000       0 


Bunkers, M. J., and co-authors, 2000: Predicting supercell motion using a new hodograph technique. Wea. Forecasting, 15, 61-79.

Davies, J. M., and R. H. Johns, 1993: Some wind and instability parameters associated with strong and violent tornadoes. Part I: Wind shear and helicity. The tornado: Its Structure, Dynamics, Prediction, and Hazards, Geophys. Monogr., No. 79, Amer. Geophys. Union, 573-582.

What about the high values of helicity?

The units of helicity are m^2/s^2. A value of 150 is generally considered to be the low threshold for tornado formation. Helicity is basically a measure of the low-level shear, so in high-shear situations, such as behind strong cold fronts or ahead of warm fronts, the values can be very large, perhaps as high as 1500. Large negative values are also possible in reverse-shear situations.
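
For reference, the helicity itself is proportional to the area swept out by the storm-relative hodograph. Here is a hedged sketch of the standard discrete form (exact for a piecewise-linear hodograph), with the storm motion (cu, cv) supplied externally, e.g. from the ID method; the function name is ours:

```python
def storm_relative_helicity(u, v, cu, cv):
    # u, v: wind components (m/s) on levels from the ground up through
    # the top of the helicity layer (e.g. 3 km); (cu, cv): storm motion.
    srh = 0.0
    for k in range(len(u) - 1):
        srh += (u[k+1] - cu) * (v[k] - cv) - (u[k] - cu) * (v[k+1] - cv)
    return srh  # m**2/s**2; positive for clockwise-turning (veering) hodographs
```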

Back to Table of Contents


There are two types of LI's which can have different labels from Eta Model forecasts, the "best (4 layer) LI" and the "surface LI". Each of these should have a different name/label in PCGRIDDS, GEMPAK, and when it is originally packed into GRIB. I am guessing that the confusion is in the "surface LI", of which there can be two different ways of coming up with a "surface" parcel to lift to 500mb. One way is to use the actual lowest model layer variables, and this is packed as the "1000-500mb LI" where the layer label is given as 1000:500 in GEMPAK, and probably 750 in PCGRIDDS (average of 1000 and 500). The other way is to use the lowest "eta boundary layer" which Russ discussed above. This is a mass-weighted mean of the model variables in the lowest 30mb above the ground, which is then lifted. The layer label on this version is a "pressure depth" layer from 30 to 0, or probably b015 in PCGRIDDS. The "best LI" is computed by taking each of the 6 "eta boundary layers", finding the LI, and taking the lowest of the 6. So it represents the lowest LI in the lowest 180 mb above the ground, while each parcel is a 30mb average of the actual model variables. (its layer label is a PD layer from 180 to 0, or b090 in PCGRIDDS?)

There has been some recent confusion concerning the lifted index products available on AFOS graphics. The LI fields available for the NGM and Eta models are calculated differently and should, therefore, be compared with the difference in mind.

The LI for the NGM model on AFOS graphics is a "best" lifted index. The least stable of the lowest 4 model sigma layers is lifted to produce the value. This type of calculation is useful in cases in which elevated layers above the surface are less stable than the surface.

The AFOS graphics LI for the Eta model, on the other hand, is a calculation based on the lowest Eta layer, a carry-over from the LFM which gave a surface-based LI. This field, therefore, will not reflect elevated instability. Cases in which a layer just above the surface is very unstable while the surface air remains stable will show a stable LI.

Three different LIs are currently computed by the post-processor of the Eta model. They differ in the depth of the layer that is being lifted. The computations are listed in a GEMPAK (N-AWIPS) "gdinfo" listing as:

PARM     LEVL1       LEVL2       VCORD
LIFT        30           0        PDLY 
LFT4       180           0        PDLY 
LIFT       500        1000        PRES

The first LI calculation lifts the lowest post-processed, 30-mb-thick, terrain-following "boundary layer". The second lifts each of the lowest six 30-mb-deep "boundary layers" and takes the "best" LI. The third value is the LI based on lifting the lowest Eta layer.

The post-processor of the NGM computes 2 LIs:

PARM       LEVL1      LEVL2       VCORD
LFT4        8400       9800        SGMA 
LIFT         500          0        PRES
Here, the first computation lifts the lowest 4 NGM sigma levels and takes the "best" value, while the second one lifts only the lowest sigma layer.

Specifically, users of PCGRIDDS using the front end file can obtain the "best" LI from the post-processed grids. It is listed as:

LIFT       0000

To compute an LI analogous to the boundary layer value, the following function in '93 PCGRIDDS must be used:


For the Eta model, the proper values for the last two terms are B015 and 500, while they are S982 and 500 when using the NGM.

Users of PCGRIDDS using the OSO file will find two different lifted index products:

LIFT         0000
LIFX         0000
The LIFT is the "best" lifted index, while the LIFX is based on the lowest sigma layer for the NGM and the lowest Eta level for the Eta. The same function used to compute a low-level LI already discussed for the front end file also works for the OSO files.

Finally, whereas the AFOS graphics depict differently calculated LI fields, the values included in FOUS messages from the NGM and Eta model runs both reflect the "best" lifted index from the respective models.

Back to Table of Contents


10 meter wind is computed using the Eta Model's first atmospheric layer wind (which is defined at the mid-point of the first layer) and the momentum flux between the ground (skin) and that mid-point. The assumption that the flux is constant across this interval allows us to solve for a wind anywhere in the interval and we solve for a wind at anemometer level or 10 meters (above MODEL terrain).
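
That solve can be sketched in its simplest (neutral-stability) limit, where the constant-flux assumption reduces to a logarithmic wind profile. The roughness length z0 here is an illustrative value, and the operational code uses the full stability-dependent surface-layer similarity functions:

```python
import math

def wind_10m(u1, v1, z1, z0=0.1):
    # u1, v1: wind at the first-layer midpoint, height z1 (m above model terrain).
    # Neutral log law: speed ~ ln(z/z0), so scale the level-1 wind down to 10 m.
    f = math.log(10.0 / z0) / math.log(z1 / z0)
    return u1 * f, v1 * f
```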

Back to Table of Contents


1) It is the pressure at the Eta Model's surface (i.e., the skin T level), whereas the other SFC stuff is at 2 m and sfc winds are at 10 m ABOVE Eta Model GROUND LEVEL. That elevation is undoubtedly higher than the elevation of the ASOS station (by about 85-95 m, I'd guess, based on the 13-mb difference). You must take this into account when comparing model surface stuff against observations.

This difference is also reflected in and needs to be accounted for when using the FOUS stuff that's generated from the Eta and NGM, except that the situation is worse there because of all the extra interpolations that are done to generate FOUS (YUK). The BUFR soundings are definitely the way to go because there is NO interpolation involved, the Eta profile from the nearest Eta step is output directly.

We have had a few spectacular successes of making a slight change in specified location for the BUFR site and getting a more representative Eta step profile. This happens normally when there is a strong gradient nearby (e.g. Salt Lake City and Buffalo) so that a small change in distance puts you onto a lower step.

2) For BUFR or FOUS, the interpretation is the same, i.e. PSFC is the model's surface pressure at the model's terrain level (SELV?). Each model (Meso Eta, early Eta, NGM, RUC, etc.) has its own terrain, and they can be vastly different. There is a FOUS TPB that has a table of the NGM elevations, but the table for the early Eta is out of date since we went to the 48km grid and will be even more out of date when we (this year) go to 32km (45 levels) in all 4 Eta runs.

Back to Table of Contents

Eta Data Assimilation System (EDAS)


(updated 23 Dec 98)

Geoffrey J. DiMego wrote:

I understand there might still be some uncertainty on what GOES sounder data are being used operationally in NCEP's models, so here are the facts in chronological order:

September 1997: GOES precipitable water data (a NESDIS retrieval product) were introduced in the 3-hourly analyses for the Eta Model. This system covers North America and large parts of the adjacent oceans.

April 1998: GOES precipitable water data began being used in the hourly analyses of the 2nd generation Rapid Update Cycle (RUC2). This system basically covers CONUS & some Canada & Mexico.

June 1998: GOES radiances (a raw data product from NESDIS) began to be used directly in the 6-hourly global analyses. Only data over the oceans is used currently due to the difficulty of dealing with surface emissivity over land.


Precipitable water for 3 layers is used because it best reflects the information content of the 3 moisture channels on the GOES sounder. Using many discrete levels of moisture is not justified.

During spring of 1998, the GOES precipitable water data were turned off until a problem with the sea-surface temperature used in the retrieval could be corrected by NESDIS.

GOES-10 radiance data are being tested for inclusion in the global analysis and the GOES-10 precipitable water data is about to be turned on in the regional runs.

The capability to use radiances directly has been included in the Eta analysis but there is no time to test it on NCEP's current C-90 computer, which is essentially saturated.

The temperature channels don't seem to be adding information that isn't already in the first guess. On the other hand, the moisture channels do add information. This can be seen by inspecting the sounding retrievals at the NWS Western Region site:

At that site, the guess and retrieved soundings can be viewed on the same skew-T chart. The differences between the first guess (Eta Model forecast) temperature profile and the retrieved temperature profile are almost always small. The changes in the moisture (dew point) profiles are much larger and more frequent. Unfortunately, moisture changes tend to be short-lived in the atmosphere and in the numerical models. This means there will be little measurable impact with conventional synoptic-scale measures. Our tests indeed have found this to be the case with small but positive impact on precipitation scores over the CONUS. Where the impact is largest, at the local level and short time scales, there is little data (only at the surface) to verify the moisture profiles.

John Derber wrote:

First, I will repeat and support Geoff's comment that the GOES sounder appears to add very little to the temperature profile. The polar orbiting satellites appear to define the temperature field (at least those modes that are observable from the satellite) quite well without the addition of the moisture data. The moisture field is much more variable than the temperature field, so some additional information from these data is possible.

However, the coverage of the GOES data is so poor that it is not clear the inclusion of the moisture channels helps very much. Because of the coverage, ECMWF has decided to make this data very low priority. Instead, they are attempting to use the moisture channel from the GOES and METEOSAT imagers rather than the sounders. This channel gives global coverage and contains much of the same upper-level moisture information that is in the sounder moisture channels. It may make more sense to improve the imager's ability to measure moisture than the sounder's. Also, the visible imagery provides substantial information on clouds, which should be very useful in future developments of the analysis systems. In general, while we do not see much impact of the sounder on our forecasts, we do see considerable promise in the future improved usage of geostationary imagery.

Back to Table of Contents


At 1200 UTC 13 May 1999 the 3-D variational (3DVAR) analysis used in the operational Eta Model runs (00z, 03z, 12z and 18z) and in the Eta Data Assimilation System (EDAS) was changed to correct parameters influencing the balance between the analyzed mass and wind fields. In November 1998, these parameters were adjusted in such a way as to draw more closely to radiosonde data, but this adjustment had the negative effect of producing much weaker balance between mass and wind analysis fields. This occurred primarily in regions with mostly single-level or single-type data (e.g. satellite temperature profiles, flight-level aircraft data or satellite cloud-drift winds). The result was frequent poor Eta analyses, especially over oceanic regions, and poor forecasts, especially at longer ranges. The corrected 3DVAR produces more balanced mass and wind analyses at the slight expense of the initial fit to radiosonde data. Tests of the corrected 3DVAR analysis produced much higher forecast accuracy.

Further details on this change can be found at :

Back to Table of Contents



(updated 8 Nov 98)

         New NESDIS daily 23-km snowcover/sea-ice analysis
         This is a new daily-updated, 23-km, N. Hemisphere snowcover 
         and sea-ice analysis.  It is produced daily in the afternoon 
         by a trained NESDIS satellite analyst on an interactive 
         dual-screen workstation, wherein multiple data sources are 
         called up, inspected, overlaid, differenced, etc. by the 
         analyst, who modifies the previous day's product to produce 
         a current update.  The chief data sources are 1) time-looped 
         geostationary VIS and IR imagery, 2) AVHRR polar orbiter VIS 
         and IR imagery, 3) observer-reported snowdepths, 4) NOHRSC 
         snowdepth analyses, 5) SSM/I retrievals of snowcover and sea 
         ice, and 6) National Ice Center (NIC) sea-ice analyses.  This 
         product now provides sea-ice initial conditions for the Eta 
         Model (compared with the former weekly 190-km NESDIS sea-ice 
         cover used by the Eta Model).

         The combination of the Eta Model resolution increase from 
         48-km to 32-km and the upgrade of the sea-ice product from 
         weekly 190-km to daily 23-km will greatly increase the 
         quality of the sea-ice analysis in the Eta Model.
         The above product will also define the initial snowcover
         in the Eta Model, but NOT the initial snowdepth.  The initial 
         snowdepth will still be defined by the daily 47-km Air Force 
         snowdepth product.  If the NESDIS product has snowcover, but the 
         Air Force product does not, then we assign a default snowdepth of 
         2 inches (the reasoning behind this default is that if the 
         snow were deep, the Air Force analysis would not likely have 
         missed it).
         You can view the new NESDIS daily 23-km N. Hemisphere product 
         on the internet here.
         Once there, click "archive".  Note this online archive goes
         back over a year.  There are some missing days, as operational 
         production did not start until early Jan 98.  Scroll to the 
         bottom to access the GIF image for the most current day.  Note 
         you can choose either the U.S. GIF or the N.H. GIF.  NESDIS 
         did not start producing the U.S. GIF online until 03 Mar 1998.  
         Unfortunately, there was not much Great Lakes sea ice this 
         past cool season, so the U.S. GIFs since 03 Mar 1998 are not 
         the best candidates for showing the vastly improved realism 
         of Great Lakes sea ice in this new product.  It is not 
         possible to see Great Lakes ice well in the N.H. GIFs.  Note 
         that the current color scheme did not start until Julian 
         Day 86 of 1997.
         The daily update of the new NESDIS product finishes around 
         23Z on a given day, but given the satellite imagery that is 
         typically used, its realistic valid time is around midday, 
         or about 18Z.  So we use it to update the Eta snowcover in
         the 18Z Eta analysis.  During the Eta forecast, the Eta physics 
         allows the initial snowcover to change in response to the
         Eta forecast of snowmelt or snowfall.
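The snowcover/snowdepth combination logic above amounts to a few lines of code. The sketch below is a hypothetical illustration: the function name is ours, and the assumption that no NESDIS snowcover implies zero initial depth is our reading of "the above product will also define the initial snowcover", not something stated explicitly.

```python
def initial_snowdepth(nesdis_snowcover: bool, af_depth_in: float) -> float:
    """Combine NESDIS yes/no snowcover with the Air Force snowdepth (inches).

    If NESDIS shows snow but the Air Force analysis has none, fall back to
    a 2-inch default depth, on the reasoning that deep snow would not have
    been missed by the Air Force analysis.
    """
    if not nesdis_snowcover:
        return 0.0            # NESDIS says no snowcover -> no initial snow
    if af_depth_in > 0.0:
        return af_depth_in    # use the Air Force depth where it has snow
    return 2.0                # NESDIS snow but Air Force bare: 2-inch default
```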

1) NGM and Eta get initial snow from daily Air Force analysis (512x512 1/8 Bedient ~47km). The NGM uses a yes/no (binary) snow field which is held fixed throughout the forecast. The Eta uses the actual snow depth and allows this field to evolve (melt or accumulate) during the forecast.

2) The Global Data Assimilation System (GDAS or sometimes called FNL) updates its snow depth field once a week from a NESDIS product on a 2 deg x 2 deg grid. The snow field evolves within the GDAS between the weekly updates. The AVN and MRF use the GDAS snow depth as initial condition and allow it to evolve during the forecast period.

3) Viewing the initial snow field from the NGM is problematic, BUT not all of the blame is ours. The NGM code says that the snow depth output field is set to 0 for no snow and set to 100 (no units specified) for snow. Somewhere in the bowels of that godawful code, the value that is output which was 100 becomes 100,000! It's as if it felt the need to convert meters to millimeters, but there should be no need since both GRIB and the old NMC Office Note 84 have units of meters for the snow depth. Anyway, we continue to search the NGM code for the place where that multiplication is performed and intend to remove it.

To make matters worse, fields of snow depth are generated by an NCO code via bi-quadratic interpolation from the NGM native grid #104 (NGM super C grid) to AWIPS grids #202, #207 and #211. The use of bi-quadratic interpolation of binary values of 0 and 100,000 results in some pretty large NEGATIVE values (like -19256) and smears the precise location of the no snow/snow line. Folks have pretty much given up trying to figure out what is in there after seeing unpacked values ranging from -19256 to over 100,000!

4) Currently, the Eta puts out snow water equivalent (using 10 inches of snow for 1 inch of water equivalent). We are looking into what it would put out if we ask for snow depth.
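The negative values mentioned in 3) above are a natural consequence of fitting a quadratic through a binary 0/100,000 field. A one-dimensional NumPy sketch (not the NCO interpolation code itself) reproduces the overshoot:

```python
import numpy as np

# Three consecutive grid points carrying the NGM binary snow field:
# two snow-free points followed by one "snow" point at 100,000.
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 0.0, 100000.0])

# An exact quadratic through three points (what a bi-quadratic
# interpolator does along each direction):
coeffs = np.polyfit(x, y, 2)

# Evaluate midway between the two snow-free points:
val = np.polyval(coeffs, 0.5)
print(val)   # about -12500: a large negative "snow depth"
```

The quadratic must dive below zero between the flat snow-free points in order to rise steeply to 100,000, which is exactly the kind of unphysical negative value (like -19256) seen in the AWIPS grids.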

Back to Table of Contents


Link to the OSO and NIC Info Page

FILENAME                      Source    Content
----------------              ------    ------------
us008_gf083_96041612_Yxmnx     NCEP
        083                             ETA FCST model # - 80 km resolution
        085                             Meso-ETA FCST model # 30 km resolution
        089                             ETA forecast model # - 48 km resolution
                     Y                  domestic GRIB designator
                     Z                  domestic GRIB designator
                       m=N              207/Regional Alaska data
                       m=Q              211/Regional CONUS
                       m=R              212/Regional CONUS
                       m=U              215/Regional CONUS

                     Y                  domestic GRIB designator
                        n=A             00 forecast hour (for 'Y' Grib only)
                        n=B             06
                        n=C             12
                        n=D             18
                        n=E             24
                        n=F             30
                        n=G             36
                        n=H             42
                        n=I             48 forecast hour

                     Z                  domestic GRIB designator
                        n=B             03 forecast hour (for 'Z' Grib only)
                        n=E             09
                        n=H             15
                        n=K             21
                        n=L             27
                        n=O             33 forecast hour
Contact: Dan Starosta 301 713 0877

In addition to this, the GRIB documentation discusses the WMO Header, which is used to make up the filenames on the server. This discussion, though incomplete, is in Appendix A of ON 388 and can help determine what the other model files are.

Back to Table of Contents


All of NCEP's models use their 'vertical momentum' equation for ultimately getting vertical velocity. There is no kinematic approach used so there is no need to apply an O'Brien correction. The third equation of motion involves the material derivative of the vertical coordinate, i.e. d(sigma or eta)/dt - what is written quite often as sigma-dot or eta-dot. Through the hydrostatic assumption, this material derivative equation is reduced to the model's continuity equation involving integrated mass-flux divergence and the surface pressure tendency. The values of sigma-dot or eta-dot get converted to a conventional vertical velocity in the models' post-processor.
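As a schematic example (not the operational post-processor code), for a simple sigma coordinate defined by p = p_top + sigma*(p_sfc - p_top), sigma-dot converts to the pressure vertical velocity omega = dp/dt, which can then be converted to a conventional w via the hydrostatic relation:

```python
G = 9.80665    # gravity, m s^-2
RD = 287.05    # dry-air gas constant, J kg^-1 K^-1

def omega_from_sigma_dot(sigma, sigma_dot, p_sfc, p_top, dps_dt):
    """omega = dp/dt = sigma_dot*(p_sfc - p_top) + sigma*dp_sfc/dt  [Pa/s].

    Follows from differentiating p = p_top + sigma*(p_sfc - p_top)
    with p_top held constant.
    """
    return sigma_dot * (p_sfc - p_top) + sigma * dps_dt

def w_from_omega(omega, p, temp_k):
    """Hydrostatic conversion w = -omega/(rho*g), with rho = p/(Rd*T)  [m/s]."""
    rho = p / (RD * temp_k)
    return -omega / (rho * G)

# Example: mid-level point, positive omega (descent) gives negative w.
omega = omega_from_sigma_dot(0.5, 1.0e-5, 100000.0, 0.0, 0.0)
w = w_from_omega(omega, 50000.0, 250.0)
```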

Back to Table of Contents


1) We have lots of room for improvement of the models and their accuracy in simulating atmospheric behavior. These things will keep us (I hope) employed indefinitely. We have lots of room for improvement of model resolution. This will come with more powerful computers. There is lots of room for improvement of initial conditions (which is where I sense you are coming from), BOTH in technique and in observational basis. Improvements (better forecasts) will be limited if we can't get better obs, but the other things will each generate increments of improvement in forecast accuracy. Depending on where in the forecast you look, however, that increment will be different. If we can't get better obs, then analysis, assimilation, and model improvements will be limited by the initial error (though that error may grow more slowly). You may not think very much of our Eta Model QPFs, but we have come a LONG way since June of 1993. We've come this far because of progress in ALL of the above areas. So, while I agree that low-level moisture is critical, I think progress is possible even without it. With four-dimensional data assimilation, EVERYTHING contributes to improved QPF, not just the low-level moisture. I'm not advocating any action on this and was adamant about NOT reducing RAOBs (not that anyone listens to me).

2) I'm hopeful on several fronts. I don't think the network of RAOBs will be decimated TOO BADLY. We need more than just those 12-hour obs anyway, but they are a great anchor for low-level moisture. A water-vapor sensor is now being tested on a couple of UPS aircraft. Potentially, if deployed on our domestic carriers, we could be getting a whole bunch of ACARS moisture data from aircraft on every ascent/descent. GPS ground receivers can infer column precipitable water, and SSM/I and GOES provide this type of information now. The trade-off is relatively high temporal frequency versus NO vertical resolution and not particularly high horizontal resolution. New variational techniques give us the ability to get maximum benefit from these types of data. Direct use of brightness temperatures (radiances) with 3D-VAR eliminates the errors in retrieval methods that, for moisture, rapidly swamp the signal. There is a Radio-Acoustic Sounder that provides low-level thermodynamic info and can be added to the Profilers, but I don't expect a lot of these for quite some time, and they are NOISY. Currently we make little use of cloud and precipitation observations in the EDAS, BUT that is ABOUT to change. While this information may not be specific to low-level content, it is still quite useful in correcting the structure of the complete moisture/condensate field in general. Finally (I may have left something out), detailed information from the WSR-88D (in my opinion, the ONLY mesoscale observing system we have) will be used in our 3D and 4D-VAR assimilations, and that information will be even better/more useful if they go with a polarized strategy in the future. We already use radial velocity in our 10-km 3D-VAR analysis for the Nest-in-the-west (also used it for the Olympics run). Reflectivity and VIL will come next.

Back to Table of Contents


The decision of when exactly to turn off the NGM and its MOS is being considered at NWS HQ at this time. It will not likely happen until the off-time AVN MOS has been accepted. The Meteorological Development Lab (MDL, which replaced/absorbed the old TDL) has seen the AVN MOS provide a big improvement (relative to the NGM MOS) with the PoPs, and is working on improvements with the max/min so that the AVN MOS will be consistently better (right now, its 24- and 36-hour projections are of equivalent quality). The on-time AVN MOS comes out later than the NGM MOS, so that is why I feel any decision will wait for the off-time AVN MOS to be available. Some forecasters still feel the NGM forecasts are worth looking at, and NWS HQ will have to take that into account when they make their decision. Update: The NGM was terminated on March 3, 2009.

Back to Table of Contents


Snow in the NAM and GFS is updated once per day using analysis data from the National Ice Center's Interactive Multisensor Snow and Ice Mapping System (IMS) and the Air Force Weather Agency's SNODEP model. The IMS product is a snow cover analysis (yes/no flag) at 4 km resolution. It is northern hemisphere only. The AFWA product is a global physical snow depth analysis at 23 km. Both products are produced daily. The IMS and AFWA datasets are interpolated to the NAM and GFS physics grids using a "budget" interpolation method in order to preserve total water volume. If IMS indicates snow cover, then the model snow depth is set to 5 cm or the AFWA depth, whichever is greater. If IMS indicates no snow cover, the model depth is set to zero regardless of the AFWA depth. The IMS data is used as a 'check' on the AFWA data because it has more accurate coverage, especially in mountain ranges (because of its higher resolution). For GFS points in the southern hemisphere, the AFWA depth is used as is. Both models' prognostic equations are written in terms of snow liquid equivalent. So the analyzed depth is converted using a 5:1 ratio for NAM and a 10:1 ratio for GFS. The NAM is updated at the T-minus 12-hour point of the 06Z NDAS cycle. The GFS is updated at the 00Z point of the GFS and GDAS cycles. Snow depth is a prognostic field, so between update times the models will accumulate and melt it.
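The daily update rule described above amounts to a few lines of logic. The sketch below is our illustration, not the operational code; the function name and the choice of meters for units are ours:

```python
def update_model_snow(ims_cover: bool, afwa_depth_m: float,
                      snow_ratio: float = 5.0) -> float:
    """Daily IMS/AFWA snow update rule, returning snow *liquid equivalent*
    in meters.

    IMS cover acts as a check on AFWA: no cover -> zero snow regardless of
    the AFWA depth; cover -> at least 5 cm physical depth.  snow_ratio is
    the depth-to-liquid-equivalent ratio: 5:1 for NAM, 10:1 for GFS.
    """
    if not ims_cover:
        return 0.0                       # IMS says bare ground
    depth = max(0.05, afwa_depth_m)      # physical depth, meters (>= 5 cm)
    return depth / snow_ratio            # convert to liquid equivalent
```

For example, with IMS cover and a 0.5 m AFWA depth, the NAM (5:1) carries 0.1 m of liquid equivalent while the GFS (10:1) carries 0.05 m.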

Back to Table of Contents

Western Region TA's

Back to Table of Contents

From SREF guidance on AWIPS, I see two consecutive 3 hr POP's of 3% yet the 6 hr POP for the identical period is 30%. How could this be?

  • If it's indeed referring to 3-h APCP vs. 6-h APCP, then the probability difference between them could reach as high as 100%. E.g., all 21 members predict precip = 0.006" in the period t to t+3 and precip = 0.005" in the period t+3 to t+6, so the POP (>=0.01") is 0% for both 3-hr periods, but the POP (>=0.01") over the 6-hr period t to t+6 is 100%, because 0.006 + 0.005 = 0.011 > 0.01".
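The arithmetic in this example is easy to verify with a short script (a sketch; a real SREF POP product is computed per grid point, but the member logic is the same):

```python
def pop(precips_inches, threshold=0.01):
    """Percent of ensemble members with precip >= threshold."""
    hits = sum(1 for p in precips_inches if p >= threshold)
    return 100.0 * hits / len(precips_inches)

# 21 members, each with 0.006" in hours 0-3 and 0.005" in hours 3-6:
first3 = [0.006] * 21
second3 = [0.005] * 21
six_hour = [a + b for a, b in zip(first3, second3)]

print(pop(first3), pop(second3), pop(six_hour))   # 0.0 0.0 100.0
```

Both 3-hr POPs are 0% because no member reaches 0.01", yet every member's 6-hr total (0.011") exceeds the threshold, so the 6-hr POP is 100%.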

Back to Table of Contents

What is the implication of the 2m Td cap fix in the new SREF_v7.0?

  • Several SREF ARW members (p01, n03, n05 and p06 in particular) occasionally produce dew-point temperature values at the 2m level (2m Td) that are erroneously high, with some values exceeding 90F. These spikes in the 2m Td occur in environments in which actual values are usually near or exceeding 80F. Three methods to eliminate these errors were tested by EMC and SPC. It was decided that simply capping 2m Td values at 28C (82.4F) was the best approach, and this cap has been added to the SREF code. It preserves good domain-averaged performance (i.e. no impact on the overall performance of 2m Td) and corrects the spatial structure of the 2m Td field while taking care of these occasional high value spikes. The results of the tests were presented at the EMC implementation briefing to the NCEP director on Sept. 25, 2015 and can be viewed at the following location (after Oct. 20, 2015):

    Note that 2m Td is strictly a diagnostic parameter computed in the model post processor, and these erroneous spikes do not impact any other forecast parameters. In other words, the issue is only pertinent to the 2m Td; there is no effect on the boundary layer or, more specifically, on any model level. The impact of the spikes on the 2m Td ensemble products such as mean and probability is expected to be minimal, since the majority of the 26 members do not have this issue. The newest version of the ARW model addresses this issue with a new way of computing 2m Td and will be used in the next SREF upgrade.
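The cap itself is trivial to express. This is a hypothetical sketch of the post-processor step; the function name is ours, not the SREF code's:

```python
def cap_dewpoint_2m(td2m_c: float, cap_c: float = 28.0) -> float:
    """Cap the diagnosed 2-m dew point at 28 C (82.4 F), per SREF v7.0.

    Applied only to the post-processor diagnostic, so model-level fields
    and the boundary layer are untouched.
    """
    return min(td2m_c, cap_c)
```

A spurious 33 C (91.4 F) spike is thus reported as 28 C, while ordinary values pass through unchanged.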

Back to Table of Contents