EMC: Mesoscale Model/Analysis Systems FAQ

Table of Contents

Introduction

This is the EMC Mesoscale Model/Analysis Systems FAQ.  We have collected detailed answers to various questions over 
the past several years and they are presented here under general subject headings.  Please remember, this is a dynamic 
document.  We are trying to eliminate outdated items, but you can expect this to be a slow process.  We are putting 
dates on all new sections so you can see when they were last updated.  To see when important changes were made in 
operations which may have rendered an FAQ item obsolete, we recommend you check the
Mesoscale Model/Analysis Systems Change log.

Back to Table of Contents

WHY ARE THE 2-M TEMPERATURE/DEW POINT AND 10-M WIND ALL ZERO AT 00-H FOR THE NAM NESTED DOMAINS? (18 Oct 11)


To interpolate the NDAS first guess to each nest's domain, we use the NEMS Preprocessing System (NPS) codes.
These codes only interpolate prognostic fields that are used to initialize the model integration
(T, wind, Q on model levels, soil parameters, surface pressure, etc.). 2-m T, 2-m Td, and 10-m wind are diagnostic
quantities that are not needed to start a model integration, so NPS does not process them. Hence, these fields,
along with many other diagnostic fields, are undefined at 00-h in the nests. At 1-h and beyond these shelter fields
will be written out since the model has started integrating.

One may ask why valid 00-h 2-m T/Td and 10-m wind appear in the 12 km parent NAM. In previous versions of the operational
NAM, what you saw at 00-h was NOT an analysis of the 2-m T and 10-m wind at the valid time. It was the last
NDAS forecast of these fields, passed through the NAM analysis in the full model restart file and posted at 00-h.
With the October 2011 NAM implementation, this has changed for the 12 km domain. The analysis will now
analyze 2-m T/q and 10-m wind if there are valid first guess values of these fields from a full model restart file
written out by the NDAS forecast. Because the nests' first guess is created by NPS (which doesn't process
shelter fields), the GSI analysis for the NAM nests will not do a 2-m T/Td and 10-m wind analysis.

It should be noted that if for any reason the NDAS did not run and the 12 km NAM had to be initialized off the GDAS, 
its 00-h 2-m T/Td and 10-m wind would be zero as well.

13 April 2017 update: The 21 March 2017 NAM upgrade
replaced the 12-h NDAS with a 6-h data assimilation cycle with hourly analysis updates for the
12 km parent domain and the 3 km CONUS and Alaska nests. Also added was the running of a diabatic digital filter
prior to every forecast run (both the 1-h forecast during the data assimilation and the 84-h NAM forecast). These two changes
now allow the NAM nests' 2-m T/Td and 10-m wind to have realistic values at 00-h.

Back to Table of Contents

WHY IS RADAR ECHO TOP HEIGHT SOMETIMES NOT OUTPUT FROM THE NAM FIRE WEATHER NEST? (26 Oct 11)


Radar echo top height in the NAM parent and nests is computed at a given grid point if the simulated reflectivity
is 18.3 dBZ or higher. If radar echo top height is undefined (due to lack of simulated reflectivity above 18.3 dBZ),
the NAM post-processor will mask out the point with a bit map instead of setting the field to some fixed value.
Given the small size of the fire weather nest domain, it is possible that for some forecast hours there are no
(or very few) simulated echoes above 18.3 dBZ to trigger the echo top calculation, so the entire radar echo top
height field would be undefined. When this happens, the NAM post-processor will not output the field at all.
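
As a sketch of this logic (not the operational post-processor code; the array names and shapes here are
illustrative), the echo top at each point is the height of the highest model level whose reflectivity reaches
18.3 dBZ, and points with no such level stay masked, mimicking the bit map:

    import numpy as np

    DBZ_THRESHOLD = 18.3  # minimum simulated reflectivity for a valid echo top

    def echo_top_height(refl_3d, height_3d):
        # refl_3d, height_3d: (nlev, ny, nx) arrays, level 0 at the bottom.
        # Returns a masked array; points with no echo >= 18.3 dBZ stay masked.
        exceeds = refl_3d >= DBZ_THRESHOLD
        ny, nx = refl_3d.shape[1:]
        top = np.ma.masked_all((ny, nx))
        for j in range(ny):
            for i in range(nx):
                levels = np.nonzero(exceeds[:, j, i])[0]
                if levels.size:
                    top[j, i] = height_3d[levels[-1], j, i]
        return top

If every point ends up masked (as can happen on the small fire weather domain), the field is dropped from the
output file entirely, as described above.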

Back to Table of Contents

WHY IS THE QPF DIFFERENT BETWEEN THE 12-KM NAM PARENT DOMAIN AND THE NAM CONUS NEST? (4 Nov 2011, updated 13 April 2017)


The NAM 12km parent uses the Betts-Miller-Janjic (BMJ) parameterized convection scheme (Janjic, 1990 MWR, 1994 MWR).
The NAM nests use a highly modified version of the BMJ scheme ("BMJ_DEV") which has moister profiles and reduced
convective triggering, leaving the majority of the precipitation 'work' to the grid-scale microphysics (Ferrier).
These settings for the nests in the BMJ_DEV scheme gave better QPF bias than running with explicit convection, and better
forecast scores against upper-air and surface observations. So you should expect to see different QPF fields between
the 12km NAM parent and the NAM nests.

13 April 2017 update:
In the 12 August 2014 NAM upgrade, the "BMJ_DEV" scheme was turned off in all NAM nests except the 6 km Alaska nest.
In the 21 March 2017 NAM upgrade, the horizontal resolution of the NAM CONUS and Alaska nests was increased to
3 km, and extensive model physics and data assimilation changes were made that improved the NAM nests' precipitation
forecasts.

Back to Table of Contents

GRIB2 LIBRARY FIX NEEDED TO RUN WRF MODEL OFF THE NEW NAMv4 GRIDS (4 May 2017)


One of the post-processing changes made as part of the 21 March 2017 NAMv4 upgrade was to create GRIB2
output grids directly from the post-processing codes, rather than generate GRIB1 files and convert them
to GRIB2 with JPEG2000 compression. Direct generation of JPEG2000 GRIB2 files from the post-processing
is generally slow, and when tested in the NAM it caused unacceptable delays in gridded product
generation and delivery, especially for the larger domains (12 km NAM parent and 3 km CONUS/Alaska nests).
For this reason, the GRIB2 compression type for each NAM domain in the v4 upgrade was set as:

JPEG2000: Hawaii, Puerto Rico and Fire Weather nests
Complex packing with 2nd order spatial differencing: CONUS nest, Alaska nest and all output from 12km NAM parent domain

Unfortunately, there is a known bug in earlier versions of NCEP's GRIB2 (g2) Fortran library that occasionally
causes decoding errors when reading GRIB2 files with complex packing. After the NAM implementation on 21 March 2017,
several groups running the WRF code using NAM input reported failures in the WRF Preprocessing System (WPS) codes due to
the complex packing GRIB2 decoding error described above. To alleviate this problem, a modified g2 library routine
that fixes the decoding error was provided to the WPS code manager at UCAR, and a patch was released for WRF v3.8.1
which can be downloaded here.

Users can access all of NCEP's production libraries at 

www.nco.ncep.noaa.gov/pmb/codes/nwprod/lib/

For the Fortran g2 library, v2.5.2 is the first version that has the GRIB2 decoding fix described above;
v3.1.0 is the latest version, with fixes for all known problems as of this date.

Back to Table of Contents

WHAT IS THE DIFFERENCE BETWEEN THE NAM CONUS NEST AND THE CONUS NMMB HIRESW RUN? (4 Dec 14)


The NAM nests (4 km CONUS, 6 km Alaska, 3 km Hawaii/Puerto Rico) run simultaneously as one-way nests inside the NAM-12 km parent 
to 60-h, and are thus available at the same time as the 12 km NAM. The 1.33 km fire weather nest is a one-way nest inside the 
4 km CONUS nest, running to 36-h. These nests get their boundary conditions updated from the parent every time step. The 
nests are initialized from the NAM Data Assimilation System (NDAS) first guess just as the 12 km NAM is initialized. The NAM 
12 km run uses the previous GFS for lateral boundary conditions.

The NAM downscaled grids that are distributed are from the NAM nests from 0-60 h and from the NAM 12 km parent from 63-84 h.

The High-Resolution Window Forecasts (HIRESW) are stand-alone runs of the NEMS-NMMB and the WRF-ARW at 3-4 km resolution. They
run after the GFS, so they use the current cycle GFS for initial and boundary conditions, except for the CONUS runs, which use
the Rapid Refresh (RAP) analysis for initial conditions. We run five HIRESW domains, two large domains (CONUS, Alaska) and
three small domains (Hawaii, Puerto Rico, and Guam), on this schedule:

0000Z : CONUS, Hawaii, Guam
0600Z : Alaska, Puerto Rico
1200Z : CONUS, Hawaii, Guam
1800Z : Alaska, Puerto Rico

On the NCEP Model Analysis and Guidance page,
the NAM nests are called "NAM-HIRES" and the HIRESW runs are called "HRW-NMM" or "HRW-ARW".

More details on the differences between the NAM CONUS nest and the CONUS NMMB HiResW run are described in this table.

13 April 2017 update: As of 21 March 2017, the horizontal resolution of the NAM CONUS and Alaska nests was increased to 
3 km. Details on the NAM upgrade that included these resolution changes can be found 
here.

Back to Table of Contents

HOW DOES THE LAND-SURFACE MODEL IN THE NAM COMPUTE ACCUMULATED SNOW? (15 Nov 11)


In the NAM, the Noah land model makes a binary choice as to whether falling precipitation is liquid or frozen based 
on the array "SR" (snow ratio) from the NAM microphysics. If SR>0.5, it's frozen precip (i.e. snow), otherwise 
it's rain.  

The density of the new snow as it hits the surface depends on the air temperature.  Snow on the ground
then accumulates and compacts.  If snow is already present on the ground, the new plus old snow
is "homogenized" into a single snowpack depth with uniform density.

Snow is initialized in the model once per day at the start of the 06Z NDAS from the National Ice Center snow cover
analysis and the AFWA snow depth analysis, with an assumed initial snow density of 5:1 (snow depth-to-snow
water equivalent).

Snow melts if the surface (skin) temperature rises to 0C, based on the surface energy budget (net radiation,
sensible and latent heat fluxes, soil heat flux). For a partially snow-covered model gridbox (<100%),
some of the surface heating goes into skin temperature increase, surface sensible and latent heat flux,
and ground heat flux for the non-snow-covered portion of the grid box, so the surface temperature can exceed
0C during snow melt, and thus the low-level (e.g. 2-m) air temperature may exceed 0C by several degrees.

Back to Table of Contents

WHAT ARE THE UNITS OF THE CLOUD ICE FIELD COMING OUT OF THE NAM? (13 April 2012)


The GRIB units for the cloud ice field are *incorrect*.  In the sample
inventory of the CONUS nest output shown here, you'll see many references to the 'CICE - Cloud Ice [kg/m^2]' field
at various levels in the vertical and forecast ranges, *but the actual units are kg/kg*.  For example, in
"123 100 mb CICE 6 hour fcst Cloud Ice [kg/m^2]" the units should be kg/kg.

This error has been a source of confusion in the past, and it's been around a long time.  Why hasn't it been fixed?
Changing things in GRIB (a WMO standard) takes a while, but the main reason is inertia: we don't want to
blow existing users out of the water. This is especially true for things that have been around for a long time and
probably have users who have adapted their applications & their forecasts to use the field in its present form.
Changing/fixing this means every one of those users has to make a change to accommodate it.
Long lead-time and widespread advance notice are required.

Note, the column-integrated cloud ice (from the link above) is listed as:
"541 entire atmosphere (considered as a single layer) TCOLI 6 hour fcst Total Column-Integrated Cloud Ice [kg/m^2]"
These units are correct.

Back to Table of Contents

HOW DO THE NAM AND GFS INITIALIZE TROPICAL STORM LOCATION? (25 May 2012, updated 13 April 2017)


Step 1: Tropical storm relocation

A job pulls in the tropical cyclone bulletins (or "tcvitals")
for the current cycle from the JTWC, FNMOC, and NCEP/NHC.
These bulletins are then merged, quality controlled, and
written to the so-called "tcvitals" files. In the GDAS, if the model
representation of the tropical storm can be found in the forecast,
the storm is mechanically relocated in the first guess prior to the
Global-GSI analysis. If the model storm cannot be found, bogus
wind observations are used (see Step 2). The GDAS first guess with the
relocated storm is also used as the background for the t-12 NDAS analysis,
and it is also used to improve the observation quality control processing
in the NDAS/NAM and GDAS/GFS.

Step 2: Synthetic tropical cyclone data

In the NAM/NDAS (for all storms) and in the GFS/GDAS (for weak-intensity
storms not found in the GDAS forecast), the tcvitals file created in
Step 1 is used to create synthetic (bogus) wind profile reports throughout
the depth of each storm. In addition, in the NAM/NDAS (for all storms) and
in the GFS/GDAS (for weak-intensity storms not found in the GDAS forecast),
all mass observations sufficiently "close" to each storm in the tcvitals file
(i.e., within the lat/lon boundary for which wind bogus reports are generated)
are flagged for non-use by the NAM-GSI analysis and Global-GSI analysis. Also,
in the NAM/NDAS and GFS/GDAS, dropwindsonde wind data sufficiently "close" to
each storm in the tcvitals file (i.e., within a distance to the storm center of
111 km or three times the radius of maximum surface wind, whichever is larger)
are flagged for non-use by the NAM-GSI analysis and Global-GSI analysis.

13 April 2017 update: With the 21 March 2017 NAM upgrade, the tropical cyclone
relocation algorithm used in the GFS is being used in the NAM for the 12 km North
American domain only. The relocation is performed on the GDAS first guess used to
start every NAM 6-h data assimilation (DA) cycle (because the relocated GDAS first guess
is not yet available) and on the NAM first guess forecast at the end of the 6-h
DA cycle. For details go to slides 49-51 in this overview
of the NAM upgrade.  With this upgrade the synthetic tropical cyclone data is no longer
used in the NAM.

Step 3: Sea level pressure obs in tcvitals file

The Global-GSI analysis reads in the estimated minimum sea-level pressure for each storm
in the tcvitals file created in Step 1 and assimilates this pressure.

Back to Table of Contents

WHY DOES THE 00Z/12Z NAM MODEL RUN HAVE 12-H QPF BUCKETS AND THE 06Z/18Z NAM MODEL RUN HAVE 3-H QPF BUCKETS? (30 Nov 12)


The 00Z and 12Z NAM runs were first called the "Early" Eta because they replaced the Limited Fine Mesh Model (LFM)
in the NCEP production suite in 1993. The Eta therefore had to create LFM lookalike files, which had 12-h QPF buckets,
and for this reason the 00Z/12Z NAM model integration has had 12-h QPF buckets ever since. 3-h QPF buckets have been
added to selected grids (such as the 40 km, 20 km, and 12 km AWIPS grids) during the post-processing of output grids.

The 06Z and 18Z NAM runs are the descendants of the old 29 km Meso Eta, which in 1995 ran at 03Z and 15Z out to 33-h.
It was initialized with a 3-h assimilation spinup from 00Z/12Z. Because of the 3-hour offset in the starting
time of the Meso Eta, it had to have 3-h QPF buckets.

In 1995 the Early Eta was upgraded from 80 km to 48 km, and a 12-h assimilation cycle was added to initialize
the 00Z/12Z 48 km Eta forecast. So from Oct 95 to Feb 97, NCEP ran a 00Z/12Z 48 km Eta and a 03Z/15Z 29 km Eta, which
were distinct model runs with no connection between the two. A decision was then made to unify the two Eta runs,
so in February 1997 the 32 km Eta was implemented with four forecasts per day at 00Z, 03Z, 12Z, and 18Z, all
from the Eta (now NAM) Data Assimilation System. At that time NCEP had to run a 03Z Eta-32 instead of an 06Z Eta-32
initially because of conflicts with the Medium-Range Forecast (MRF). When NCEP acquired its first IBM
supercomputer in 2000, the 03Z Eta-32 was moved to 06Z. However, because of the development history
of the 00Z/12Z and 06Z/18Z NAM runs, we have to maintain the different QPF bucket lengths for the
foreseeable future.

See this web link for the history of NAM 
and other NCEP Mesoscale model evolution (with links to documentation) since 1993.

Back to Table of Contents

WHAT ARE THE EQUATIONS FOR MOVING BETWEEN GEOGRAPHIC AND ROTATED LAT/LON ON THE NMMB's B-grid?

(This answer applies to all non-WRF NMM and all Eta Model E-grids too.)
Let all longitudes be reckoned positive east.
Let lat_e and lon_e be the geographic (earth) lat/lon and lat_r and lon_r be the rotated lat/lon.
Let lat_0 and lon_0 be the geographic lat/lon of the grid's central point.
This is where the grid's rotated equator and rotated prime meridian cross.

First find the rotated lat/lon of any point on the grid for which the geographic lat/lon is known.
Let X = cos(lat_0) cos(lat_e) cos(lon_e - lon_0) + sin(lat_0) sin(lat_e)
Let Y = cos(lat_e) sin(lon_e - lon_0)
Let Z = - sin(lat_0) cos(lat_e) cos(lon_e - lon_0) + cos(lat_0) sin(lat_e)
Then lat_r = atan [ Z / sqrt(X**2 + Y**2) ]
And lon_r = atan [ Y / X ]  (if X < 0, add pi radians)

Now find the geographic lat/lon of any point for which the rotated lat/lon are known.
lat_e = asin [ sin(lat_r) cos(lat_0) + cos(lat_r) sin(lat_0) cos(lon_r) ]
lon_e = lon_0 +/- acos [ cos(lat_r) cos(lon_r) / ( cos(lat_e) cos(lat_0) ) - tan(lat_e) tan(lat_0) ]
In the preceding eqn, use the "+" sign for lon_r > 0 and use the "-" sign for lon_r < 0. 
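
The formulas above transcribe directly into code. A small sketch (angles in radians, east longitude positive;
the grid center used here is arbitrary, and atan2/asin replace the atan and quadrant rules stated above,
which is equivalent since X**2 + Y**2 + Z**2 = 1):

    from math import asin, atan2, acos, cos, sin, tan, radians, degrees, copysign

    LAT_0, LON_0 = radians(50.0), radians(-111.0)  # example grid center only

    def geo_to_rotated(lat_e, lon_e, lat_0=LAT_0, lon_0=LON_0):
        x = cos(lat_0) * cos(lat_e) * cos(lon_e - lon_0) + sin(lat_0) * sin(lat_e)
        y = cos(lat_e) * sin(lon_e - lon_0)
        z = -sin(lat_0) * cos(lat_e) * cos(lon_e - lon_0) + cos(lat_0) * sin(lat_e)
        return asin(z), atan2(y, x)   # atan2 applies the "add pi if X < 0" rule

    def rotated_to_geo(lat_r, lon_r, lat_0=LAT_0, lon_0=LON_0):
        lat_e = asin(sin(lat_r) * cos(lat_0) + cos(lat_r) * sin(lat_0) * cos(lon_r))
        arg = cos(lat_r) * cos(lon_r) / (cos(lat_e) * cos(lat_0)) - tan(lat_e) * tan(lat_0)
        arg = max(-1.0, min(1.0, arg))              # guard against round-off
        lon_e = lon_0 + copysign(acos(arg), lon_r)  # sign of acos term follows lon_r
        return lat_e, lon_e

    # round trip at an arbitrary point:
    lat_r, lon_r = geo_to_rotated(radians(40.0), radians(-100.0))
    print([degrees(a) for a in rotated_to_geo(lat_r, lon_r)])  # ~[40.0, -100.0]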

Back to Table of Contents

WRF Post Processing

WHAT SIMULATED RADAR REFLECTIVITY PRODUCTS ARE PRODUCED BY THE NAM? (12 July 06)


Radar reflectivity products from the NAM model

At a given forecast time, three-dimensional radar reflectivities are
calculated at the native model resolution (vertical and horizontal) from
the algorithm described below using as input the three-dimensional
mixing ratios of rain and precipitation ice (including variable ice
densities to distinguish between snow, graupel, and sleet), and the
two-dimensional convective surface precipitation rates.   The following
two-dimensional radar reflectivity products are derived from the
three-dimensional forecast reflectivities:

   1. Lowest level reflectivity -  calculated at the lowest model level
      above the ground.
   2. Composite reflectivity - maximum anywhere within the atmospheric
      column.
   3. 1 km and 4 km AGL reflectivity - interpolated to heights of 1 km
      and 4 km, respectively, above the ground.
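
A minimal sketch of how these three products follow from the 3-D field (illustrative only; the operational
post-processor works on the native model arrays):

    import numpy as np

    def derived_reflectivity_products(refl_3d, agl_3d):
        # refl_3d: (nlev, ny, nx) reflectivity in dBZ, level 0 = lowest level
        # agl_3d : (nlev, ny, nx) height of each level above ground, in m
        lowest_level = refl_3d[0]          # 1. lowest model level above ground
        composite = refl_3d.max(axis=0)    # 2. column maximum
        nlev, ny, nx = refl_3d.shape       # 3. interpolate to 1 km and 4 km AGL
        refl_1km = np.empty((ny, nx))
        refl_4km = np.empty((ny, nx))
        for j in range(ny):
            for i in range(nx):
                refl_1km[j, i] = np.interp(1000.0, agl_3d[:, j, i], refl_3d[:, j, i])
                refl_4km[j, i] = np.interp(4000.0, agl_3d[:, j, i], refl_3d[:, j, i])
        return lowest_level, composite, refl_1km, refl_4km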


Back to Table of Contents

HOW ARE THE SIMULATED REFLECTIVITIES COMPUTED? (12 July 06)


Algorithm used to calculate forecast radar reflectivities from the NAM model

Simulated radar reflectivity from the operational NAM is in units of dBZ
and it is calculated from the following sequence.

  1. dBZ = 10*LOG10(Ze), where Ze is the equivalent radar reflectivity
     factor.  It is derived from the sixth moment of the size
     distribution for precipitation particles, and it assumes that all
     of the particles are liquid water drops (rain).
  2. Ze = (Ze)rain + (Ze)ice + (Ze)conv. The equivalent radar
     reflectivity factor is the sum of the radar backscatter from rain
     [(Ze)rain], from precipitation-sized ice particles [(Ze)ice], and
     from parameterized convection [(Ze)conv].
  3. (Ze)rain is calculated as the sixth moment of the rain drop size
     distribution.  It is the integral of N(D)*D**6 over all drop
     sizes, where N(D) is the size distribution of rain drops as a
     function of their diameter (D).
  4. (Ze)ice is calculated as the sixth moment of the particle size
     distributions for ice, but with several correction factors.  The
     first is accounting for the reduced backscatter from ice particles
     compared to liquid drops.  Because equivalent radar reflectivity
     assumes that the precipitation is in the form of rain, this
     correction factor is 0.189 (the ratio of the dielectric factor of
     ice divided by that of liquid water).  The second correction
     factor accounts for deviations in ice particle densities from that
     of solid ice, in which the radar backscatter from a large,
     low-density irregular ice particle (e.g., a fluffy aggregate) is
     the same as that from a solid ice sphere of the same mass (e.g., a
     small sleet pellet).
  5. (Ze)conv is the radar backscatter from parameterized subgrid-scale
     cumulus convection.  The algorithm that is employed in the NAM
     assumes the radar reflectivity at the surface is
     (Ze)conv=300*(Rconv)**1.4, where Rconv is the surface rain rate
     (mm/h) derived from the cumulus parameterization.  This so-called
     Z-R relationship is based on the original WSR-88D algorithm.  The
     radar reflectivity is assumed to remain constant with height from
     the surface up to the lowest freezing level, and then it is
     assumed to decrease by 20 dBZ from the freezing level to the top
     of the parameterized convective cloud.


For items 3 and 4 above, the simulated equivalent radar reflectivity
factor (Ze) is calculated from the sixth moment of the size distribution
for rain and ice particles assumed in the microphysics scheme.  Radar
reflectivities calculated from numerical models are highly sensitive to
the size distributions of rain and precipitating ice particles,
particularly since the radar backscatter is proportional to the sixth
power of the particle sizes.  In addition, the mixture of water and ice
within a wet ice particle (e.g., during melting) can have a large influence
on radar backscatter, and this effect is currently ignored in the forecast
reflectivity products.  The contribution from parameterized convection is
highly parameterized and is likely to be the largest source of error
in the forecast reflectivities.  For strong local storms, which tend to
occur with greater frequency during the warm season, forecast reflectivities
from the NAM are expected to be less intense and cover larger areas than
the radar observations.
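
The core of the sequence above can be summarized in a few lines. This sketch assumes (Ze)rain and (Ze)ice
have already been computed from the size distributions (items 3 and 4, including the 0.189 dielectric factor
and the ice density corrections), and only shows how the terms combine with the convective contribution (item 5):

    import numpy as np

    def simulated_dbz(ze_rain, ze_ice, rconv_mmh):
        # ze_rain, ze_ice: equivalent reflectivity factors in mm^6/m^3
        # rconv_mmh: convective surface rain rate in mm/h
        ze_conv = 300.0 * rconv_mmh ** 1.4            # WSR-88D Z-R relationship
        ze = ze_rain + ze_ice + ze_conv
        return 10.0 * np.log10(np.maximum(ze, 1e-3))  # floor avoids log10(0)

Above the surface, (Ze)conv is held constant up to the lowest freezing level and then ramped down by 20 dBZ
between the freezing level and the parameterized convective cloud top, as described in item 5.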

Back to Table of Contents

WHY CAN'T I FIND THE RADAR PRODUCTS IN MY AWIP12 GRIB OUTPUT FILE? (14 July 06)

Assuming you are using the nam.tHHz.awip12FF.tm00 files at
ftp://ftpprd.ncep.noaa.gov/pub/data/nccf/com/nam/prod/nam.YYYYMMDD/ where HH is the cycle time = 00, 06, 12 or 18, FF is the forecast hour =
00, 01, 02 ... 24 and YYYYMMDD is year, month & day; you are probably
not seeing them because the new radar reflectivity fields are defined in
NCEP GRIB parameter table version #129
(http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html#TABLE129), and
you are expecting them to be defined in the normal/default Table #2 (http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html).  We had to
put them in this alternative table because Table 2 had no room for new
parameters to be added.  According to the GRIB documentation (see http://www.nco.ncep.noaa.gov/pmb/docs/on388/section1.html) you can find
which Parameter Table is being used in octet 4 of the GRIB PDS (Product
Definition Section).
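
If you want to check this yourself, the table version is easy to read from the raw file. A minimal sketch
(it assumes the file starts directly with the "GRIB" indicator, i.e. no WMO header bytes in front, and only
inspects the first message):

    def grib1_table_and_param(path):
        # Section 0 is 8 octets: "GRIB", 3-octet total length, 1-octet edition.
        # The PDS follows; octet 4 is the parameter table version and octet 9
        # is the parameter number (kpds5).
        with open(path, "rb") as f:
            indicator = f.read(8)
            if indicator[:4] != b"GRIB" or indicator[7] != 1:
                raise ValueError("not a GRIB edition 1 message")
            pds = f.read(28)              # the PDS is at least 28 octets
        return pds[3], pds[8]             # octets 4 and 9 (0-based indexing)

    table_version, parameter = grib1_table_and_param("nam.t12z.awip1212.tm00")
    # a radar record would show table_version == 129 and parameter 211 or 212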

Looking at an inventory of the nam.t12z.awip1212.tm00 file found in
ftp://ftpprd.ncep.noaa.gov/pub/data/nccf/com/nam/prod/nam.20060713, the
reflectivities are at records 171-174

171:47614082:d=06071312:REFD:kpds5=211:kpds6=109:kpds7=1:TR=0:P1=12:P2=0:TimeU=1:hybrid lev 1:12hr fcst:NAve=0
172:47975514:d=06071312:REFC:kpds5=212:kpds6=200:kpds7=0:TR=0:P1=12:P2=0:TimeU=1:atmos col:12hr fcst:NAve=0
173:48336946:d=06071312:REFD:kpds5=211:kpds6=105:kpds7=1000:TR=0:P1=12:P2=0:TimeU=1:1000 m above gnd:12hr fcst:NAve=0
174:48698378:d=06071312:REFD:kpds5=211:kpds6=105:kpds7=4000:TR=0:P1=12:P2=0:TimeU=1:4000 m above gnd:12hr fcst:NAve=0

record 171 = Lowest model level derived model radar reflectivity
record 172 = Composite radar reflectivity
record 173 = 1000 m above ground derived model radar reflectivity
record 174 = 4000 m above ground derived model radar reflectivity

Note the kpds5 entries above, and that in Table 129, derived model
reflectivity is variable #211 and composite reflectivity is variable
#212. In Table 2 these would have been upward short wave and long wave
flux, which is probably what your existing processing assumed they were.

Back to Table of Contents

HOW IS THE SOIL MOISTURE INITIALIZED IN THE NAM? (08/2001, updated 09/2013)

The NAM forecast is initialized with the NAM Data Assimilation System (NDAS). The NDAS runs a sequence of 4 GSI analyses and 3-h NMMB forecasts, starting at t-12h prior to the NAM forecast start time. At the t-12 h start time of the NDAS, we use atmospheric states from the NCEP Global Data Assimilation System (GDAS), while the NDAS soil states are fully cycled from the previous NDAS run (we call this hybrid setup "partial cycling").

The soil moisture and soil temperature fields in the 4-layer (Noah) land-surface model (LSM) used by the operational NDAS/NAM are continuously cycled without soil moisture nudging (e.g. to some climatology). The fields used now in the NAM are the sole product of model physics and internal NDAS surface forcing (e.g. precipitation and surface radiation). During the forecast portion of the NDAS, the model predicted precipitation that would normally be used as the forcing to the Noah scheme is replaced by the hourly merged Stage II/IV precipitation analyses (go here for details on the Stage II/IV analyses). The more timely Stage II (created directly from the hourly radar and gauge data) is used to supplement the Stage IV (regional analyses from the River Forecast Centers, mosaicked for national coverage).

To address biases in the Stage II/IV analyses, until January 2010 we used the CPC daily gauge analysis to construct a long-term 2-d precipitation surplus/deficit (hourly vs. the more accurate daily analysis). This long-term precip budget is used to make adjustments to the hourly analyses (up to +/- 20% of the original hourly values). As of January 2010, the CPC daily gauge analysis was terminated, so in the operational NDAS we are just making adjustments to the Stage II/IV based on the budget that was in place as of that date. In the parallel NDAS, we are using the Climatology-Calibrated Precipitation Analysis (CCPA, Hou et al, 2012 J. Hydro.) to make adjustments to the long-term precip budget. This will be implemented in operations in the next NAM upgrade sometime in 2014.
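
The adjustment itself is a simple capped scaling. A sketch, with illustrative names (the operational code
tracks the surplus/deficit budget per grid point; here it is boiled down to a single ratio):

    def adjust_hourly_precip(hourly_value, budget_ratio):
        # budget_ratio: long-term ratio of the more accurate daily (or CCPA)
        # analysis to the accumulated hourly analyses at this grid point.
        # The adjustment is capped at +/- 20% of the original hourly value.
        factor = min(max(budget_ratio, 0.8), 1.2)
        return hourly_value * factor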

Back to Table of Contents

Post Processor

WHY IS THE SLP DIFFERENT BETWEEN AWIPS AND AFOS?

The sea level pressure found in AFOS uses the Shuell reduction method (PMSL), which is a function mainly of low-level temperature. This is quite different from the Mesinger method (EMSL), which can be found on AWIPS. The Mesinger method uses a horizontal Poisson equation to smoothly interpolate from above-ground to underground temperatures. This results in a smoother field, particularly over the Rockies.

PMSL has been added to AWIPS grids for the Eta, but they are not distributed yet. A change request must be filed with the AWIPS DRG before that can happen.

Some further comments on this subject:

Let me attempt to convey a couple of my thoughts & feelings on sea-level pressure. Remember that the Eta's initial field of sea-level pressure is not the result of performing a sea-level pressure analysis. Like at all the other forecast output times, it is computed diagnostically via the reduction procedure using the free-atmosphere values of the state variables in the model. I admit to being unemotional about this because, after all, except over the oceans this is an UNDERGROUND quantity. I know that some folks are quite emotional about the way this quantity is computed, and I have witnessed many (TOO MANY!) hours of debate between advocates of one method or another. I characterize this as 'emotional' to distinguish it from a strictly scientific argument because there is no precise theoretical guidance on the computation of sea-level pressure underground. During the days of mercury barometers, there was considerable diversity of methods for local correction of reported values. Some of that persists even today, although ASOS has made things a bit more uniform.

Back to Table of Contents

INFO ON ICWF PRODUCTS FROM ETA

Here is some information about the so-called supplemental fields from the early Eta and Meso Eta. They have been requested by the ICWF folks and can be used as an alternative starting point (the previous ICWF forecast or a gridded NGM MOS are the other 2 options) for ICWF. The gridded fields can be ingested directly into the ICWF from their Lambert conic-conformal grid. I prefer to call them sensible weather guidance fields from the Eta Model.

These were first generated from the 10km Nested Eta runs performed for last summer's Olympic games and used as an alternative starting point in the ICWF processing for venue specific forecasts. For the Olympics, the fields were on a 10 km grid #218 (Lambert conic conformal). For temperature, the dependence of surface temperatures on elevation came out very nicely. These fields have been generated from the early Eta (on 40 km grid #212) and from the Meso Eta (on 20 km grid #215) for about a year now. These fields have been available on NCEP's NIC server and more recently on the OSO server. DRG RC#2111 put them into the main AWIPS distribution stream.

Here are the fields, which are available every 3 hours throughout the forecast period, and a few comments on how they are generated:

(Eta Model precip type information has been part of the hourly BUFR soundings for several years, so these last two items are not completely new)

In evaluations of the Olympic Eta-10km temperature, specific humidity and wind speed forecasts against surface reports, the Eta-10 was found to be superior to the Eta-29 Meso Eta forecasts. These comparisons were WITHOUT compensation for the difference between the Eta terrain and that of the stations. The error level would be expected to be higher than that of MOS at the MOS stations, but on the other hand, MOS would be unable to modulate temperatures from valleys to mountain tops. Undoubtedly, this is the largest cause of perceived systematic error in Eta forecasts versus MOS forecasts (which we don't access). It is our intent to perform this required compensation in the Eta Model post-processor in the very near future, resulting in model forecast fields of the sensible weather elements listed above that will be directly comparable to the observations at surface reporting stations.

Back to Table of Contents

HOW DO I GET THE BUFR OUTPUT FILES? (10/01)

We have hourly output files available via anonymous ftp on a near real-time basis. These files are in BUFR format and contain forecasts of sounding and surface variables. To get them, ftp to ftp.ncep.noaa.gov, log on as anonymous, and cd to /pub/data1/eta/erl.YYMMDD/bufr.tXXz (where XX is 00, 06, 12, or 18). Inside this directory should be a list of about 1200 files, each one containing data for each hour of a 60-hr Eta forecast for a given station location. The files are named bufr.STATID.YYYYMMDDHH, where STATID is the 6-digit station id and YYYYMMDDHH is the date and hour of the beginning of the forecast. A station list is available. A guide to unpacking BUFR can be found here. Displays of the BUFR data for every station can be found at the Eta Meteogram web site.

Back to Table of Contents

HOW ARE CAPE VALUES COMPUTED? (11/01)

Two types of CAPE/CIN are currently computed by the Eta model post-processor (both Early Eta and Meso Eta). The actual computation is the same in both cases; what differs is the parcel that is lifted. The first computation is listed in a GEMPAK (N-AWIPS) "gdinfo" listing as:

    PARM    VCORD   LEVL1   LEVL2
    
    CAPE    NONE      0
    CINS    NONE      0 
    
In these fields the model sounding is searched in a 70mb layer above the ground to find the parcel with the highest THETA-E. In GRIB this is labeled as surface CAPE/CINS (level type=1).

The second computation is listed in a GEMPAK (N-AWIPS) "gdinfo" listing as:

    PARM    VCORD   LEVL1   LEVL2
    
    CAPE    PDLY     180       0
    CINS    PDLY     180       0
    
In these fields the 30mb thick "Boundary Layer" with the highest THETA-E is used to define the parcel. In GRIB this is labeled as pressure depth layer CAPE/CINS (level type=116).

Recall that after integration of a forecast, the Eta model post-processor creates 6 "boundary layers" each 30mb thick that are terrain following (sort of like sigma surfaces) and stacked on one another. In GEMPAK they are labeled as PDLY layers (Pressure Depth LaYer). The first 30mb thick layer (PDLY 30:0) is from p[sfc] to p[sfc]-30mb. These gridded fields are averages for a 30mb layer that can contain as many as 5 Eta levels. The second 30mb layer (PDLY 60:30), third (PDLY 90:60) ... etc. are all averages for terrain following layers stacked upon one another. If the surface pressure were everywhere 1000mb the layers would be 1000-970mb, 970-940mb, 940-910mb, 910-880mb, 880-850mb, and 850-820mb. In reality the layers are terrain following with the bottom layer (30:0) starting at the surface pressure. It is one of these layers (the one with the highest layer-average THETA-E) that serves as the parcel for this "best" CAPE/CIN calculation.

In both cases the vertical integration of the positive or negative buoyancy for the selected parcel is continued through the highest buoyant layer. This assures that the computation will not miss a second layer of (perhaps major) instability that is "capped" by a second (perhaps minor) stable layer. It assures us that the CAPE calculation will consider all unstable layers. It also means that in areas of zero CAPE the CIN will be zero as well. To avoid misinterpretation, areas of zero CAPE should probably be clearly indicated since they are likely regions of strong stability but CIN is undefined.
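
As a sketch of the two parcel selections described above (illustrative only; the real post-processor works
on the native model profiles):

    import numpy as np

    def surface_cape_parcel(theta_e, pressure, psfc):
        # 'Surface' CAPE/CIN: pick the level with the highest THETA-E within
        # 70 mb of the ground. pressure: level pressures (mb), theta_e in K.
        in_layer = [k for k, p in enumerate(pressure) if psfc - p <= 70.0]
        return max(in_layer, key=lambda k: theta_e[k])

    def best_cape_parcel(layer_mean_theta_e):
        # 'Best' CAPE/CIN: of the six 30-mb boundary layers, use the one with
        # the highest layer-mean THETA-E (element 0 = sfc to sfc-30mb layer).
        return int(np.argmax(layer_mean_theta_e))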

Some of these descriptions conflict with Russ Treadon's post-processor TPB but do reflect the current methodology employed in the Early Eta and Meso Eta models.

According to Ralph Petersen, PCGRIDDS will make up a label name that is the average of the 2 levels given if more than one level is packed into the GRIB header, so the "best" CAPE will have a level indicator that is around 910, while the "surface" CAPE will have a level indicator of b015.

Back to Table of Contents

HOW IS HELICITY COMPUTED? (10/01)

Storm-relative helicity is now computed using the Internal Dynamics (ID) method (Bunkers et al., 2000). Prior to March 2000, the model used the Davies and Johns method, in which supercell motion is estimated to be 30 degrees to the right of and 85% of the mean wind vector for an 850-300 mb mean wind < 15 knots, and 75% of the mean wind vector for an 850-300 mb mean wind > 15 knots. This method works very well in situations with "classic" severe-weather hodographs but works poorly in events characterized by atypical hodographs featuring either weak flow or unusual wind profiles (such as northwest flow). The ID method has been found to perform as well as the Davies and Johns method in the classic cases and much better in the atypical cases. The ID method includes an advective component (associated with the 0-6 km pressure-weighted mean wind) and a propagation component (associated with supercell dynamics) that adjusts the motion along a line orthogonal to the 0-6 km mean vertical wind shear vector. A storm motion vector is computed, and this is used to compute helicity. The relevant model fields and WMO parameter IDs are:

    VALUE   PARAMETER                       UNITS
    190     Storm-relative Helicity         m**2/s**2 
    191     U-component Storm Motion        m/s
    192     V-component Storm Motion        m/s
    
At the National Centers the data will appear in a GEMPAK "gdinfo" listing of the Meso Eta model output as:
    PARM    VCORD   LEVL1   LEVL2
    
    HLCY    HGHT    3000       0
    USTM    HGHT    6000       0
    VSTM    HGHT    6000       0
    

References:

Bunkers, M. J., and co-authors, 2000: Predicting supercell motion using a new hodograph technique. Wea. Forecasting, 15, 61-79.

Davies, J. M., and R. H. Johns, 1993: Some wind and instability parameters associated with strong and violent tornadoes. Part I: Wind shear and helicity. The tornado: Its Structure, Dynamics, Prediction, and Hazards, Geophys. Monogr., No. 79, Amer. Geophys. Union, 573-582.

What about the high values of helicity?

The units of helicity are m^2/s^2. The value of 150 is generally considered to be the low threshold for tornado formation. Helicity is basically a measure of the low-level shear, so in high-shear situations, such as behind strong cold fronts or ahead of warm fronts, the values will be very large, perhaps as high as 1500. High negative values are also possible in reverse-shear situations.
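
For reference, the two calculations fit in a few lines. This sketch uses the 7.5 m/s deviation magnitude from
Bunkers et al. (2000) and a plain (not pressure-weighted) 0-6 km mean wind with surface-to-6-km shear, so it is
an approximation of the operational method, not a copy of it:

    import numpy as np

    def bunkers_right_mover(u, v, z):
        # u, v: wind components (m/s) on heights z (m AGL), ordered low to high
        in_layer = z <= 6000.0
        u_mean, v_mean = u[in_layer].mean(), v[in_layer].mean()
        u_shr = np.interp(6000.0, z, u) - u[0]   # 0-6 km shear vector
        v_shr = np.interp(6000.0, z, v) - v[0]
        mag = np.hypot(u_shr, v_shr)
        # propagation component: 90 degrees to the right of the shear vector
        return u_mean + 7.5 * v_shr / mag, v_mean - 7.5 * u_shr / mag

    def storm_relative_helicity(u, v, z, cu, cv, depth=3000.0):
        # 0-3 km storm-relative helicity (m^2/s^2) for storm motion (cu, cv)
        srh = 0.0
        for k in range(len(z) - 1):
            if z[k] >= depth:
                break
            srh += (u[k + 1] - cu) * (v[k] - cv) - (u[k] - cu) * (v[k + 1] - cv)
        return srh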

Back to Table of Contents

WHAT ABOUT LIFTED INDEX?

There are two types of LIs which can have different labels from Eta Model forecasts: the "best (4 layer) LI" and the "surface LI". Each of these should have a different name/label in PCGRIDDS, GEMPAK, and when it is originally packed into GRIB. I am guessing that the confusion is in the "surface LI", for which there can be two different ways of coming up with a "surface" parcel to lift to 500mb.

One way is to use the actual lowest model layer variables; this is packed as the "1000-500mb LI", where the layer label is given as 1000:500 in GEMPAK, and probably 750 in PCGRIDDS (the average of 1000 and 500). The other way is to use the lowest "eta boundary layer" which Russ discussed above. This is a mass-weighted mean of the model variables in the lowest 30mb above the ground, which is then lifted. The layer label on this version is a "pressure depth" layer from 30 to 0, or probably b015 in PCGRIDDS.

The "best LI" is computed by taking each of the 6 "eta boundary layers", finding the LI, and taking the lowest of the 6. So it represents the lowest LI in the lowest 180 mb above the ground, while each parcel is a 30mb average of the actual model variables. (Its layer label is a PD layer from 180 to 0, or b090 in PCGRIDDS?)

There has been some recent confusion concerning the lifted index products available on AFOS graphics. The LI fields available for the NGM and Eta models are calculated differently and should, therefore, be compared with the difference in mind.

The LI for the NGM model on AFOS graphics is a "best" lifted index. The least stable of the lowest 4 model sigma layers is lifted to produce the value. This type of calculation is useful in cases in which elevated layers above the surface are less stable than the surface.

The AFOS graphics LI for the Eta model, on the other hand, is a calculation based on the lowest Eta layer, a carry-over from the LFM which gave a surface-based LI. This field, therefore, will not reflect elevated instability. Cases in which a layer just above the surface is very unstable while the surface air remains stable will show a stable LI.

Three different LIs are currently computed by the post-processor of the Eta model. They differ in the depth of the layer that is being lifted. The computations are listed in a GEMPAK (N-AWIPS) "gdinfo" listing as:

PARM     LEVL1       LEVL2       VCORD
     
LIFT        30           0        PDLY 
LFT4       180           0        PDLY 
LIFT       500        1000        PRES
     

The first LI calculation lifts the lowest, post-processed, 30 mb thick, and terrain-following "boundary layer". The second lifts each of the lowest six 30 mb deep "boundary layers" and takes the "best" LI. The third value is the LI based on lifting the lowest Eta layer.
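
The "best" selection itself is just a minimum over the six layer LIs, e.g.:

    def best_lifted_index(layer_lis):
        # layer_lis: LI from lifting each of the six 30-mb boundary layers
        return min(layer_lis)   # lowest LI = most unstable

    print(best_lifted_index([2.1, -1.4, -3.2, -0.5, 1.0, 2.8]))  # -> -3.2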

The post-processor of the NGM computes 2 LIs:

PARM       LEVL1      LEVL2       VCORD
     
LFT4        8400       9800        SGMA 
LIFT         500          0        PRES
     
Here, the first computation lifts the lowest 4 NGM sigma levels and takes the "best" value, while the second one lifts only the lowest sigma layer.

Specifically, users of PCGRIDDS using the front end file can obtain the "best" LI from the post-processed grids. It is listed as:

PARAM     LEVEL
     
LIFT       0000
     

To compute an LI analogous to the boundary layer value, the following function in '93 PCGRIDDS must be used:

LNDX LVL1 LVL2

For the Eta model, the proper values for the last two terms are B015 and 500, while they are S982 and 500 when using the NGM.

Users of PCGRIDDS using the OSO file will find two different lifted index products:

PARAM       LEVEL
     
LIFT         0000
LIFX         0000
     
The LIFT is the "best" lifted index, while the LIFX is based on the lowest sigma layer for the NGM and the lowest Eta level for the Eta. The same function used to compute a low-level LI already discussed for the front end file also works for the OSO files.

Finally, whereas the AFOS graphics depict differently calculated LI fields, the values included in FOUS messages from the NGM and Eta model runs both reflect the "best" lifted index from the respective models.

Back to Table of Contents

HOW ARE 10M WINDS COMPUTED?

10 meter wind is computed using the Eta Model's first atmospheric layer wind (which is defined at the mid-point of the first layer) and the momentum flux between the ground (skin) and that mid-point. The assumption that the flux is constant across this interval allows us to solve for a wind anywhere in the interval and we solve for a wind at anemometer level or 10 meters (above MODEL terrain).
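
Under the constant-flux assumption, and neglecting the stability corrections that the operational
surface-layer scheme applies, the reduction is the familiar log-profile interpolation. A neutral-only
sketch (the roughness length and layer height are illustrative inputs):

    from math import log

    def wind_at_10m(u_lowest, z_lowest, z0):
        # u_lowest: wind speed (m/s) at the lowest model layer midpoint
        # z_lowest: height (m) of that midpoint above the model surface
        # z0      : roughness length (m)
        return u_lowest * log(10.0 / z0) / log(z_lowest / z0)

    print(wind_at_10m(8.0, 35.0, 0.1))  # ~6.3 m/s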

Back to Table of Contents

WHY IS THE SURFACE PRESSURE GIVEN IN THE BUFR OUTPUT DIFFERENT FROM WHAT IS OBSERVED?

1) It is the pressure at the Eta Model's surface (e.g. skin T level), whereas the other SFC stuff is at 2m and sfc winds are at 10m ABOVE Eta Model GROUND LEVEL. That elevation is undoubtedly higher than the elevation of the ASOS station (by about 85-95 m, I'd guess, based on the 13mb difference). You must take this into account when comparing model surface stuff against observations.

This difference is also reflected in, and needs to be accounted for when using, the FOUS stuff that's generated from the Eta and NGM, except that the situation is worse there because of all the extra interpolations that are done to generate FOUS (YUK). The BUFR soundings are definitely the way to go because there is NO interpolation involved; the Eta profile from the nearest Eta step is output directly.

We have had a few spectacular successes of making a slight change in specified location for the BUFR site and getting a more representative Eta step profile. This happens normally when there is a strong gradient nearby (e.g. Salt Lake City and Buffalo) so that a small change in distance puts you onto a lower step.

2) For BUFR or FOUS, the interpretation is the same, i.e. PSFC is the model's surface pressure at the model's terrain level (SELV?). Each model (Meso Eta, Early Eta, NGM, RUC, etc.) has its own terrain, and they can be vastly different. There is a FOUS TPB that has a table of the NGM elevations, but the table for the Early Eta is out of date since we went to the 48km grid, and will be even more out of date when we go (this year) to 32km (45 levels) in all 4 Eta runs.

Back to Table of Contents

General

WHAT IS ON THE OSO SERVER?

Link to the OSO and NIC Info Page


FILENAME                      Source    Content
----------------              ------    ------------
            yymmddhh
us008_gf083_96041612_Yxmnx     NCEP
        083                             ETA FCST model # - 80 km resolution
        085                             Meso-ETA FCST model # 30 km resolution
        089                             ETA forecast model # - 48 km resolution
                               
                     Y                  domestic GRIB designator
                     Z                  domestic GRIB designator
                       m=N              207/Regional Alaska data
                       m=Q              211/Regional CONUS
                       m=R              212/Regional CONUS
                       m=U              215/Regional CONUS

                     Y                  domestic GRIB designator
                        n=A             00 forecast hour (for 'Y' Grib only)
                        n=B             06
                        n=C             12
                        n=D             18
                        n=E             24
                        n=F             30
                        n=G             36
                        n=H             42
                        n=I             48 forecast hour

                     Z                  domestic GRIB designator
                        n=B             03 forecast hour (for 'Z' Grib only)
                        n=E             09
                        n=H             15
                        n=K             21
                        n=L             27
                        n=O             33 forecast hour
Contact: Dan Starosta 301 713 0877

In addition to this, the GRIB documentation discusses the WMO Header which is used to make up the filenames on the server. This incomplete discussion is in Appendix A of ON 388 and could help to determine what the other model files are.
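
A small decoder for the filename pattern in the table, purely as an illustration (the 'x' positions in
'Yxmnx' are not documented above, so they are skipped over, and only the designator, grid, and forecast-hour
letters are interpreted; the example filename is hypothetical):

    MODEL = {"083": "Eta - 80 km", "085": "Meso Eta - 30 km", "089": "Eta - 48 km"}
    GRID = {"N": "207 (Alaska)", "Q": "211 (CONUS)", "R": "212 (CONUS)", "U": "215 (CONUS)"}
    FHOUR_Y = dict(zip("ABCDEFGHI", range(0, 54, 6)))             # 00..48 by 6
    FHOUR_Z = {"B": 3, "E": 9, "H": 15, "K": 21, "L": 27, "O": 33}

    def decode_oso_name(name):
        # name follows the us008_gf###_yymmddhh_Yxmnx pattern from the table
        _, gf, ymdh, tail = name.split("_")
        designator, grid_letter, fhour_letter = tail[0], tail[2], tail[3]
        fhour = (FHOUR_Y if designator == "Y" else FHOUR_Z)[fhour_letter]
        return {"model": MODEL[gf[2:]], "cycle": ymdh,
                "grid": GRID[grid_letter], "fhour": fhour}

    print(decode_oso_name("us008_gf083_96041612_YxQBx"))  # hypothetical name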

Back to Table of Contents

HOW ARE MODEL VERTICAL VELOCITIES COMPUTED?

All of NCEP's models use their 'vertical momentum' equation for ultimately getting vertical velocity. There is no kinematic approach used so there is no need to apply an O'Brien correction. The third equation of motion involves the material derivative of the vertical coordinate, i.e. d(sigma or eta)/dt - what is written quite often as sigma-dot or eta-dot. Through the hydrostatic assumption, this material derivative equation is reduced to the model's continuity equation involving integrated mass-flux divergence and the surface pressure tendency. The values of sigma-dot or eta-dot get converted to a conventional vertical velocity in the models' post-processor.
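
The final conversion from the hydrostatic pressure vertical velocity to a conventional one is the standard
hydrostatic relation. A sketch (treating this as exactly what the NCEP post-processor does is an assumption;
it is the textbook conversion):

    def omega_to_w(omega, p, t, rd=287.04, g=9.81):
        # omega: pressure vertical velocity (Pa/s); p (Pa), t (K)
        rho = p / (rd * t)         # density from the ideal gas law
        return -omega / (rho * g)  # hydrostatic w (m/s); upward positive

    print(omega_to_w(-1.0, 85000.0, 280.0))  # ~ +0.096 m/s upward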

Back to Table of Contents

HOW CAN WE IMPROVE OBSERVATIONS AND ANALYSES OF MOISTURE TO IMPROVE QPF?

1) We have lots of room for improvement of the models and their accuracy in simulating atmospheric behavior. These things will keep us (I hope) employed indefinitely. We have lots of room for improvement of model resolution. This will come with more powerful computers. There is lots of room for improvement of initial conditions (which is where I sense you are coming from), BOTH in technique and in observational basis. Improvements (better forecasts) will be limited if we can't get better obs, but the other things will each generate increments of improvement in forecast accuracy. Depending on where in the forecast you look, however, that increment will be different. If we can't get better obs, then analysis and assimilation and model improvements will be limited by the initial error (though that error may grow more slowly). You may not think very much of our Eta Model QPFs, but we have come a LONG way since June of 1993. We've come this far because of progress in ALL of the above areas. So, while I agree that low-level moisture is critical, I think progress is possible even without it. With Four-Dimensional Data Assimilation, EVERYTHING contributes to improved QPF, not just the low-level moisture. I'm not advocating any action on this and was adamant about NOT reducing RAOBs (not that anyone listens to me).

2) I'm hopeful on several fronts. I don't think the network of RAOBs will be decimated TOO BADLY. We need more than just those 12-hour obs anyway, but they are a great anchor for low-level moisture. A water-vapor sensor is now being tested on a couple of UPS aircraft. Potentially, if deployed on our domestic carriers, we could be getting a whole bunch of ACARS moisture data from aircraft on every ascent/descent. GPS ground receivers can infer column precipitable water, and SSM/I and GOES provide this type of information now. The trade-off is relatively high temporal frequency versus NO vertical resolution and not particularly high horizontal resolution. New variational techniques give us the ability to get maximum benefit from these types of data. Direct use of brightness temperatures (radiances) with 3D-VAR eliminates the errors in retrieval methods that, for moisture, rapidly swamp the signal. There is a Radio-Acoustic Sounder that provides low-level thermodynamic info and can be added to the Profilers, but I don't expect a lot of these for quite some time and they are NOISY. Currently we make little use of cloud and precipitation observations in the EDAS, BUT that is ABOUT to change. While this information may not be specific to low-level content, it is still quite useful in correcting the structure of the complete moisture/condensate field in general. Finally (I may have left something out), detailed information from the WSR-88D (in my opinion, the ONLY mesoscale observing system we have) will be used in our 3D and 4D-VAR assimilations, and that information will be even better/more useful if they go with a polarized strategy in the future. We already use radial velocity in our 10km 3D-VAR analysis for the Nest-in-the-west (we also used it for the Olympics run). Reflectivity and VIL will come next.

Back to Table of Contents

WHAT'S THE SNOW UPDATE PROCEDURE IN THE NAM AND GFS? (updated 31 Jan 12)

Snow in the NAM and GFS is updated once per day using analysis data from the National Ice Center's Interactive Multisensor Snow and Ice Mapping System (IMS) and the Air Force Weather Agency's SNODEP model. The IMS product is a snow cover analysis (yes/no flag) at 4 km resolution; it is northern hemisphere only. The AFWA product is a global physical snow depth analysis at 23 km. Both products are produced daily.

The IMS and AFWA datasets are interpolated to the NAM and GFS physics grids using a "budget" interpolation method in order to preserve total water volume. If IMS indicates snow cover, then the model snow depth is set to 5 cm or the AFWA depth, whichever is greater. If IMS indicates no snow cover, the model depth is set to zero regardless of the AFWA depth. The IMS data is used as a 'check' on the AFWA data because it has more accurate coverage, especially in mountain ranges (because of its higher resolution). For GFS points in the southern hemisphere, the AFWA depth is used as-is.

Both models' prognostic equations are written in terms of snow liquid equivalent, so the analyzed depth is converted using a 5:1 ratio for the NAM and a 10:1 ratio for the GFS. The NAM is updated at the T-minus-12-hour point of the 06Z NDAS cycle. The GFS is updated at the 00Z point of the GFS and GDAS cycles. Snow depth is a prognostic field, so between update times the models will accumulate and melt it.
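
The decision logic reduces to a few lines. A sketch of the rules above (function and argument names are illustrative):

    def update_model_snow(ims_snow_cover, afwa_depth_m, hemisphere, model):
        # ims_snow_cover: True/False from the 4-km IMS analysis (NH only)
        # afwa_depth_m  : physical snow depth (m) from the 23-km AFWA analysis
        if hemisphere == "south":              # no IMS: use AFWA as-is (GFS)
            depth = afwa_depth_m
        elif ims_snow_cover:
            depth = max(0.05, afwa_depth_m)    # at least 5 cm where IMS has snow
        else:
            depth = 0.0                        # IMS says no snow: override AFWA
        ratio = 5.0 if model == "NAM" else 10.0  # depth : liquid equivalent
        return depth / ratio                   # prognostic liquid-equivalent (m)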

Back to Table of Contents

From SREF guidance on AWIPS, I see two consecutive 3 hr POP's of 3% yet the 6 hr POP for the identical period is 30%. How could this be?

Back to Table of Contents

What is the implication of the 2m Td cap fix in the new SREF_v7.0?

Back to Table of Contents