EMC: Mesoscale Model/Analysis Systems FAQ

Table of Contents

Introduction

This is the EMC Mesoscale Model/Analysis Systems FAQ.  We have collected detailed answers to various questions over 
the past several years and they are presented here under general subject headings.  Please remember, this is a dynamic 
document.  We are trying to eliminate outdated items, but you can expect this to be a slow process.  We are putting 
dates on all new sections so you can see when they were last updated.  To see when important changes were made in 
operations which may have rendered an FAQ item obsolete, we recommend you check the 
 Mesoscale Model/Analysis Systems Change log.

Back to Table of Contents

WHY ARE THE 2-M TEMPERATURE/DEW POINT AND 10-M WIND ALL ZERO AT 00-H FOR THE NAM NESTED DOMAINS? (18 Oct 11)


To interpolate the NDAS first guess to each nest's domain, we use the NEMS Preprocessing System (NPS) codes.  
These codes only interpolate prognostic fields that are used to initialize the model integration 
(T, wind, Q on model levels, soil parameters, surface pressure, etc.). 2-m T, 2-m Td, and 10-m wind are diagnostic 
quantities that are not needed to start a model integration, so NPS does not process them. Hence, these fields 
are undefined at 00-h in the nests, as are many other diagnostic fields. At 1-h and beyond these shelter fields 
will be written out, since the model has started integrating. 

One may ask why valid 00-h 2-m T/Td and 10-m wind appear in the 12 km parent NAM. In previous versions of the operational 
NAM, what you see at 00-h is NOT an analysis of the 2-m T and 10-m wind at the valid time. It is the last 
NDAS forecast of these fields that is passed through the NAM analysis in the full model restart file, and posted at 00-h. 
With the October 2011 NAM implementation, this has been changed for the 12 km domain. The analysis will now 
analyze 2-m T/q and 10-m wind if there are valid first guess values of these fields from a full model restart file 
written out by the NDAS forecast. Because the nests' first guess is created by NPS (which doesn't process 
shelter fields), the GSI analysis for the NAM nests will not do a 2-m T/Td and 10-m wind analysis. 

It should be noted that if for any reason the NDAS did not run and the 12 km NAM had to be initialized off the GDAS, 
its 00-h 2-m T/Td and 10-m wind would be zero as well.

13 April 2017 update: The 21 March 2017 NAM upgrade 
replaced the 12-h NDAS with a 6-h data assimilation cycle with hourly analysis updates for the
12 km parent domain and the 3 km CONUS and Alaska nests. Also added was the running of a diabatic digital filter
prior to every forecast run (both the 1-h forecast during the data assimilation and the 84-h NAM forecast). These two changes
now allow the NAM nests' 2-m T/Td and 10-m wind to have realistic values at 00-h.

Back to Table of Contents

WHY IS RADAR ECHO TOP HEIGHT SOMETIMES NOT OUTPUT FROM THE NAM FIRE WEATHER NEST? (26 Oct 11)


Radar echo top height in the NAM parent and nests is computed at a given grid point if the simulated reflectivity 
is 18.3 dBZ or higher. If radar echo top height is undefined (due to lack of simulated reflectivity above 18.3 dBZ), 
the NAM post-processor will mask out the point with a bit map instead of setting the field to some fixed value. 
Given the small size of the fire weather nest domain, it is possible for some forecast hours to have no (or very few) 
simulated echoes above 18.3 dBZ that would trigger the calculation of radar echo top, in which case the entire 
radar echo top height field is undefined. When this happens, the NAM post-processor will 
not output the field at all.
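
Conceptually, the masking behaves like the sketch below (illustrative Python, not the operational post-processor code; the array names, shapes, and the helper function are assumptions): the echo top is the height of the highest level at which simulated reflectivity reaches 18.3 dBZ, and points with no qualifying level are masked via the bit map.

import numpy as np

DBZ_THRESHOLD = 18.3

def echo_top_height(refl_dbz, height_m):
    """Echo top from 3-D reflectivity.  refl_dbz and height_m have shape
    (nlev, ny, nx) with level 0 at the bottom; names/shapes are assumptions."""
    meets = refl_dbz >= DBZ_THRESHOLD                    # levels meeting the threshold
    any_echo = meets.any(axis=0)                         # points with at least one such level
    # Index of the highest qualifying level (argmax on the flipped vertical axis
    # finds the topmost True at each point).
    top_index = (meets.shape[0] - 1) - np.argmax(meets[::-1, :, :], axis=0)
    top_height = np.take_along_axis(height_m, top_index[np.newaxis, :, :], axis=0)[0]
    # Mask (bit map) points with no echo; if every point is masked, the
    # post-processor simply does not write the field.
    return np.ma.masked_where(~any_echo, top_height)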

Back to Table of Contents

WHY IS THE QPF DIFFERENT BETWEEN THE 12-KM NAM PARENT DOMAIN AND THE NAM CONUS NEST? (4 Nov 2011, updated 13 April 2017)


The NAM 12km parent uses the Betts-Miller-Janjic (BMJ) parameterized convection scheme (Janjic, 1990 MWR, 1994 MWR). 
The NAM nests use a highly modified version of the BMJ scheme ("BMJ_DEV") which has moister profiles and is set to have reduced 
convective triggering, which leaves the majority of the precipitation 'work' to the grid-scale microphysics (Ferrier). 
These settings for the nests in the BMJ_DEV scheme gave a better QPF bias than running with explicit convection, and better
forecast scores against upper-air and surface observations. So you should expect to see different QPF fields between 
the 12km NAM parent and the NAM nests.

13 April 2017 update:
In the 12 August 2014
NAM upgrade, the "BMJ_DEV" scheme was turned off the all NAM nests except the 6 km Alaska nest. 
In the 21 March 2017
NAM upgrade, the horizontal resolution of the NAM CONUS and Alaska nests was increased to
3 km, and extensive model physics and data assimilation changes were made that improved the NAM nests' precipitation
forecasts.

Back to Table of Contents

GRIB2 LIBRARY FIX NEEDED TO RUN WRF MODEL OFF THE NEW NAMv4 GRIDS (4 May 2017)


One of the post-processing changes made as part of the 21 March 2017 NAMv4 upgrade was to create GRIB2 
output grids directly from the post-processing codes, rather than generate GRIB1 files and convert them 
to GRIB2 with JPEG2000 compression. Generally, direct generation of JPEG2000 GRIB2 files from the 
post-processing is slow, and when tested in the NAM it caused unacceptable delays in gridded product 
generation and delivery, especially for the larger domains (12 km NAM parent and 3 km CONUS/Alaska nests). 
For this reason, the GRIB2 compression type for each NAM domain in the v4 upgrade was set as:

JPEG2000: Hawaii, Puerto Rico and Fire Weather nests
Complex packing with 2nd order spatial differencing: CONUS nest, Alaska nest and all output from 12km NAM parent domain

Unfortunately, there is a known bug in earlier versions of NCEP's GRIB2 (g2) Fortran library that occasionally 
causes decoding errors when reading GRIB2 files with complex packing. After the NAM implementation on 21 March 2017, 
several groups running the WRF code using NAM input reported failures in the WRF Preprocessing System (WPS) codes due to 
the complex packing GRIB2 decoding error described above. To alleviate this problem, a modified g2 library routine 
that fixes the decoding error was provided to the WPS code manager at UCAR, and a patch was released for WRF v3.8.1 
which can be downloaded here.

Users can access all of NCEP's production libraries at 

www.nco.ncep.noaa.gov/pmb/codes/nwprod/lib/

For the Fortran g2 library, v2.5.2 is the first version that has the GRIB2 decoding fix described above; v3.1.0 
is the latest version with fixes for all known problems as of this writing. 
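
If you want to verify how the records in a particular NAM GRIB2 file are packed, a quick check along the following lines can help (a sketch only; it assumes a Python environment with the pygrib package, which wraps ecCodes, the filename is a placeholder, and key names such as "packingType" follow ecCodes conventions):

import pygrib   # wraps ecCodes; needs a g2/ecCodes build recent enough to decode complex packing

grbs = pygrib.open("nam.t00z.conusnest.hiresf06.tm00.grib2")   # placeholder filename
for grb in grbs:
    # "packingType" typically reports values such as "grid_jpeg" (JPEG2000)
    # or "grid_complex_spatial_differencing" (complex packing, 2nd order).
    print(grb.shortName, grb.level, grb.packingType)
grbs.close()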

Back to Table of Contents

WHAT IS THE DIFFERENCE BETWEEN THE NAM CONUS NEST AND THE CONUS NMMB HIRESW RUN? (4 Dec 14)


The NAM nests (4 km CONUS, 6 km Alaska, 3 km Hawaii/Puerto Rico) run simultaneously as one-way nests inside the NAM-12 km parent 
to 60-h, and are thus available at the same time as the 12 km NAM. The 1.33 km fire weather nest is a one-way nest inside the 
4 km CONUS nest, running to 36-h. These nests get their boundary conditions updated from the parent every time step. The 
nests are initialized from the NAM Data Assimilation System (NDAS) first guess just as the 12 km NAM is initialized. The NAM 
12 km run uses the previous GFS for lateral boundary conditions.

The NAM downscaled grids that are distributed are from the NAM nests from 0-60 h and from the NAM 12 km parent from 63-84 h.

The High-Resolution Window Forecasts (HIRESW) are stand-alone runs of the NEMS-NMMB and the WRF-ARW at 3-4 km resolution. They 
run after the GFS, so they use the current cycle GFS for initial and boundary conditions, except for the CONUS runs, which use 
the Rapid Refresh (RAP) analysis for initial conditions. We run five HIRESW domains, two large domains (CONUS, Alaska) and 
three small domains (Hawaii, Puerto Rico, and Guam), on this schedule:

0000Z : CONUS, Hawaii, Guam
0600Z : Alaska, Puerto Rico
1200Z : CONUS, Hawaii, Guam
1800Z : Alaska, Puerto Rico

On the NCEP Model Analysis and Guidance page,
the NAM nests are called "NAM-HIRES" and the HIRESW runs are called "HRW-NMM" or "HRW-ARW".

More details on the differences between the NAM CONUS nest and the CONUS NMMB HiResW run are described in this table.

13 April 2017 update: As of 21 March 2017, the horizontal resolution of the NAM CONUS and Alaska nests was increased to 
3 km. Details on the NAM upgrade that included these resolution changes can be found 
here.

Back to Table of Contents

HOW DOES THE LAND-SURFACE MODEL IN THE NAM COMPUTE ACCUMULATED SNOW? (15 Nov 11)


In the NAM, the Noah land model makes a binary choice as to whether falling precipitation is liquid or frozen based 
on the array "SR" (snow ratio) from the NAM microphysics. If SR>0.5, it's frozen precip (i.e. snow), otherwise 
it's rain.  

The density of the new snow as it hits the surface is dependent on the air temperature.  Snow on the ground 
then accumulates and compacts.  If snow is already present (on the ground), the new plus old snow density 
is "homogenized" into a single snowpack depth with uniform density.

Snow is initialized in the model once per day at the start of the 06Z NDAS from the National Ice Center snow cover 
analysis and the AFWA snow depth analysis, with assumed initial snow density of 5:1 (snow depth-to-snow 
water equivalent). 

Snow melts if the surface (skin) temperature rises to 0 C, based on the surface energy budget (net radiation, 
sensible and latent heat fluxes, soil heat flux). For a partially snow-covered model grid box (<100% coverage), 
some of the surface heating goes into increasing the skin temperature, the surface sensible and latent heat fluxes, 
and the ground heat flux for the non-snow-covered portion of the grid box, so the surface temperature can exceed 
0 C during snow melt, and thus the low-level (e.g. 2-m) air temperature may exceed 0 C by several degrees. 
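
The logic described above amounts to something like the sketch below (illustrative Python only, not the Noah LSM source; the function names are made up, and the fresh-snow density is a placeholder for the temperature-dependent value used in the model):

def partition_precip(precip_mm, sr):
    """Binary rain/snow decision on the microphysics snow ratio SR.
    Returns (rain_mm, snow_water_equivalent_mm)."""
    if sr > 0.5:
        return 0.0, precip_mm        # all frozen (snow)
    return precip_mm, 0.0            # all liquid (rain)

def merge_snowpack(old_depth_m, old_swe_m, new_swe_m, new_density_ratio=0.1):
    """Homogenize old + new snow into a single pack with one bulk density.
    new_density_ratio is SWE/depth for fresh snow; 0.1 is a placeholder for
    the temperature-dependent value used in the model."""
    new_depth_m = new_swe_m / new_density_ratio
    total_depth = old_depth_m + new_depth_m
    total_swe = old_swe_m + new_swe_m
    bulk_density = total_swe / total_depth if total_depth > 0.0 else 0.0
    return total_depth, total_swe, bulk_density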

Back to Table of Contents

WHAT ARE THE UNITS OF THE CLOUD ICE FIELD COMING OUT OF THE NAM? (13 April 2012)


The GRIB units for the cloud ice field are *incorrect*.  In a sample 
inventory of the CONUS nest output (shown here), you'll see many references to the 'CICE - Cloud Ice [kg/m^2]' field 
at various levels in the vertical and forecast ranges, *but the actual units are kg/kg*.  For example, in 
"123 100 mb CICE 6 hour fcst Cloud Ice [kg/m^2]" the units should be kg/kg.

This error has been a source of confusion in the past, and it's been around a long time.  Why hasn't it been fixed?  
Changing things in GRIB (a WMO standard) takes a while, but the main reason is inertia: we don't want to 
blow existing users out of the water. This is especially true for things that have been around for a long time and  
probably have users who have adapted their applications and their forecasts to use the field in its present form. 
Changing/fixing this means every one of those users has to make a change to accommodate it.  
A long lead time and widespread advance notice are required. 

Note, the column-integrated cloud ice (from the link above) is listed as: 
"541 entire atmosphere (considered as a single layer) TCOLI 6 hour fcst Total Column-Integrated Cloud Ice [kg/m^2]".
These units are correct.
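
As a practical note, the relationship between the two is just a vertical integral over pressure: the 3-D CICE records are mixing ratio (kg/kg), and integrating q*dp/g through the column gives a value in kg/m^2, which is what TCOLI represents. A minimal sketch (illustrative only; the profile ordering and names are assumptions):

import numpy as np

G = 9.81  # m s^-2

def column_integrated_ice(q_ice_kg_per_kg, pressure_pa):
    """Column-integrated cloud ice (kg/m^2) from a mixing-ratio profile (kg/kg):
    roughly sum(q * dp / g).  Profiles are 1-D, ordered top of atmosphere to
    surface; ordering and names are assumptions."""
    dp = np.diff(pressure_pa)                                # layer thicknesses (Pa), positive
    q_layer = 0.5 * (q_ice_kg_per_kg[:-1] + q_ice_kg_per_kg[1:])
    return float(np.sum(q_layer * dp)) / G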

Back to Table of Contents

HOW DO THE NAM AND GFS INITIALIZE TROPICAL STORM LOCATION? (25 May 2012, updated 13 April 2017)


Step 1: Tropical storm relocation

A job pulls in the tropical cyclone bulletins (or "tcvitals")
for the current cycle from the JTWC, FNMOC, and NCEP/NHC.
These bulletins are then merged, quality controlled, and
written to the so-called "tcvitals" files. In the GDAS, if the model
representation of the tropical storm can be found in the forecast,
the storm is mechanically relocated in the first guess prior to the 
Global-GSI analysis. If the model storm cannot be found, bogus 
wind observations are used (see step 2). The GDAS first guess with the
relocated storm is also used as background to the t-12 NDAS analysis, 
and it is also used to improve the observation quality control processing 
in the NDAS/NAM and GDAS/GFS. 

Step 2: Synthetic tropical cyclone data

In the NAM/NDAS (for all storms) and in the GFS/GDAS (for weak-intensity 
storms not found in the GDAS forecast) the tcvitals file created in 
Step 1 is used to create synthetic (bogus) wind profile reports throughout
the depth of each storm. In addition, in the NAM/NDAS (for all storms) and
in the GFS/GDAS (for weak-intensity storms not found in the GDAS forecast), 
all mass observations sufficiently "close" to each storm in the tcvitals file
(i.e., within the lat/lon boundary for which wind bogus reports are generated) 
are flagged for non-use by the NAM-GSI analysis and Global-GSI analysis. Also, 
in the NAM/NDAS and GFS/GDAS, dropwindsonde wind data sufficiently "close" to
each storm in the tcvitals file (i.e., within a distance to the storm center of
111 km or three times the radius of maximum surface wind, whichever is larger)
are flagged for non-use by the NAM-GSI analysis and Global-GSI analysis.
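
The dropwindsonde distance test, for example, works out to something like the following (an illustrative sketch only; the function and variable names are made up, and a simple haversine great-circle distance is assumed):

import math

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in km (inputs in degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = p2 - p1
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2.0 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def flag_dropsonde(ob_lat, ob_lon, storm_lat, storm_lon, rmw_km):
    """Flag the wind ob if it lies within 111 km or 3x the radius of maximum
    surface wind of the storm center, whichever is larger (per the text above)."""
    return great_circle_km(ob_lat, ob_lon, storm_lat, storm_lon) <= max(111.0, 3.0 * rmw_km)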

13 April 2017 update: With the 21 March 2017 NAM upgrade, the tropical cyclone
relocation algorithm used in the GFS is being used in the NAM for the 12 km North
American domain only. The relocation is performed on the GDAS first guess used to 
start every NAM 6-h data assimilation (DA) cycle (because the relocated GDAS first guess
is not yet available) and on the NAM first guess forecast at the end of the 6-h 
DA cycle. For details go to slides 49-51 in this overview 
of the NAM upgrade.  With this upgrade the synthetic tropical cyclone data is no longer
used in the NAM.

Step 3: Sea level pressure obs in tcvitals file

The Global-GSI analysis reads in the estimated minimum sea-level pressure for each storm
in the tcvitals file created in Step 1 and assimilates this pressure. 

October 2019 update:

1) Tropical cyclone relocation was turned off in the GFS in June 2019 with the FV3GFS implementation.

2) Tropical cyclone relocation in the NAM was never invoked. In May 2017, a production change on the NCEP
supercomputer occurred that stopped updating the tropical cyclone storm listing file that the NAM used for 
relocation. This was not discovered until the summer of 2018. Since the relocation was turned off in the GFS,
the software package for it is no longer supported by EMC, so it was decided not to turn it back on in the 
NAM.

Back to Table of Contents

WHY DOES THE 00Z/12Z NAM MODEL RUN HAVE 12-H QPF BUCKETS AND THE 06Z/18Z NAM MODEL RUN HAVE 3-H QPF BUCKETS? (30 Nov 12)


The 00Z and 12Z NAM run was first called the "Early" Eta because it replaced the Limited Fine Mesh Model (LFM)
in the NCEP production suite in 1993. It therefore had to create LFM lookalike files, which had 12-h QPF buckets,
and for this reason the 00Z/12Z NAM model integration has had 12-h QPF buckets ever since. 3-h QPF buckets have been added to
selected grids (such as the 40 km, 20 km, and 12 km AWIPS grids) during the post-processing of output grids.

The 06Z and 18Z NAM runs are the descendants of the old 29 km Meso Eta, which ran at 03Z and 15Z out to 33-h 
starting in 1995 and was initialized with a 3-h assimilation spinup from 00Z/12Z. Because of the 3-hour offset in the starting 
time of the Meso Eta, it had to have 3-h QPF buckets. 

In 1995 the Early Eta was upgraded from 80 to 48 km, and a 12-h assimilation cycle was added to initialize 
the 00Z/12Z 48 km Eta forecast. So from Oct 95 to Feb 97 NCEP ran a 00Z/12Z 48 km Eta and a 03Z/15Z 29 km Eta, which 
were distinct model runs with no connection between the two. A decision was then made to unify the two Eta runs,
so in February 1997 the 32 km Eta was implemented with four forecasts per day at 00Z, 03Z, 12Z, and 18Z, all 
from the Eta (now NAM) Data Assimilation System. At that time NCEP initially had to run a 03Z Eta-32 instead of an 06Z Eta-32
because of conflicts with the Medium-Range Forecast (MRF). When NCEP acquired its first IBM  
supercomputer in 2000, the 03Z Eta-32 was moved to 06Z. However, because of the development history  
of the 00Z/12Z and 06Z/18Z NAM runs, we have to maintain the different QPF bucket lengths for the
foreseeable future.

See this web link for the history of NAM 
and other NCEP Mesoscale model evolution (with links to documentation) since 1993.

Back to Table of Contents

WHAT ARE THE EQUATIONS FOR MOVING BETWEEN GEOGRAPHIC AND ROTATED LAT/LON ON THE NMMB's B-grid?

(This answer applies to all non-WRF NMM and all Eta Model E-grids too.)
Let all longitudes be reckoned positive east.
Let lat_e and lon_e be the geographic (earth) lat/lon and lat_r and lon_r be the rotated lat/lon.
Let lat_0 and lon_0 be the geographic lat/lon of the grid's central point.  
This is where the grid's rotated equator and rotated prime meridian cross.

First find the rotated lat/lon of any point on the grid for which the geographic lat/lon is known.
Let X = cos(lat_0) cos(lat_e) cos(lon_e - lon_0) + sin(lat_0) sin(lat_e)
Let Y = cos(lat_e) sin(lon_e - lon_0)
Let Z = - sin(lat_0) cos(lat_e) cos(lon_e - lon_0) + cos(lat_0) sin(lat_e)
Then lat_r = atan [ Z / sqrt(X**2 + Y**2) ]
And lon_r = atan [ Y / X ]  (if X < 0, add pi radians)

Now find the geographic lat/lon of any point for which the rotated lat/lon are known.
lat_e = asin [ sin(lat_r) cos(lat_0) + cos(lat_r) sin(lat_0) cos(lon_r) ]
lon_e = lon_0 ± acos [ cos(lat_r) cos(lon_r) / ( cos(lat_e) cos(lat_0) ) - tan(lat_e) tan(lat_0) ]
In the preceding eqn, use the "+" sign for lon_r > 0 and use the "-" sign for lon_r < 0. 
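
If it helps, here is a direct transcription of the equations above into Python (a sketch for checking the formulas, not an NCEP utility; angles are handled in radians internally, and atan2 takes care of the "add pi when X < 0" quadrant rule):

import math

def geo_to_rotated(lat_e_deg, lon_e_deg, lat_0_deg, lon_0_deg):
    """Geographic -> rotated lat/lon (degrees), per the equations above."""
    lat_e, lon_e = math.radians(lat_e_deg), math.radians(lon_e_deg)
    lat_0, lon_0 = math.radians(lat_0_deg), math.radians(lon_0_deg)
    x = math.cos(lat_0) * math.cos(lat_e) * math.cos(lon_e - lon_0) + math.sin(lat_0) * math.sin(lat_e)
    y = math.cos(lat_e) * math.sin(lon_e - lon_0)
    z = -math.sin(lat_0) * math.cos(lat_e) * math.cos(lon_e - lon_0) + math.cos(lat_0) * math.sin(lat_e)
    lat_r = math.atan2(z, math.sqrt(x * x + y * y))
    lon_r = math.atan2(y, x)                     # handles the "add pi if X < 0" rule
    return math.degrees(lat_r), math.degrees(lon_r)

def rotated_to_geo(lat_r_deg, lon_r_deg, lat_0_deg, lon_0_deg):
    """Rotated -> geographic lat/lon (degrees), per the equations above."""
    lat_r, lon_r = math.radians(lat_r_deg), math.radians(lon_r_deg)
    lat_0, lon_0 = math.radians(lat_0_deg), math.radians(lon_0_deg)
    lat_e = math.asin(math.sin(lat_r) * math.cos(lat_0) + math.cos(lat_r) * math.sin(lat_0) * math.cos(lon_r))
    arg = math.cos(lat_r) * math.cos(lon_r) / (math.cos(lat_e) * math.cos(lat_0)) - math.tan(lat_e) * math.tan(lat_0)
    arg = max(-1.0, min(1.0, arg))               # guard against round-off just outside [-1, 1]
    dlon = math.acos(arg)
    lon_e = lon_0 + dlon if lon_r > 0.0 else lon_0 - dlon
    return math.degrees(lat_e), math.degrees(lon_e)

# Round-trip check, e.g. with a central point at (50 N, 107 W):
# rotated_to_geo(*geo_to_rotated(40.0, -100.0, 50.0, -107.0), 50.0, -107.0) ~ (40.0, -100.0)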

Back to Table of Contents

Post Processing

WHAT SIMULATED RADAR REFLECTIVITY PRODUCTS ARE PRODUCED BY THE NAM? (12 July 06)


Radar reflectivity products from the NAM model

At a given forecast time, three-dimensional radar reflectivities are
calculated at the native model resolution (vertical and horizontal) from
the algorithm described below using as input the three-dimensional
mixing ratios of rain and precipitation ice (including variable ice
densities to distinguish between snow, graupel, and sleet), and the
two-dimensional convective surface precipitation rates.   The following
two-dimensional radar reflectivity products are derived from the
three-dimensional forecast reflectivities:

   1. Lowest level reflectivity -  calculated at the lowest model level
      above the ground.
   2. Composite reflectivity - maximum anywhere within the atmospheric
      column.
   3. 1 km and 4 km AGL reflectivity - interpolated to heights of 1 km
      and 4 km, respectively, above the ground.


Back to Table of Contents

HOW ARE THE SIMULATED REFLECTIVITIES COMPUTED? (12 July 06)


Algorithm used to calculate forecast radar reflectivities from the NAM model

Simulated radar reflectivity from the operational NAM is in units of dBZ
and it is calculated from the following sequence.

  1. dBZ = 10*LOG10(Ze), where Ze is the equivalent radar reflectivity
     factor.  It is derived from the sixth moment of the size
     distribution for precipitation particles, and it assumes that all
     of the particles are liquid water drops (rain).
  2. Ze = (Ze)rain + (Ze)ice + (Ze)conv. The equivalent radar
     reflectivity factor is the sum of the radar backscatter from rain
     [(Ze)rain], from precipitation-sized ice particles [(Ze)ice], and
     from parameterized convection [(Ze)conv].
  3. (Ze)rain is calculated as the sixth moment of the rain drop size
     distribution.  It is the integral of N(D)*D**6 over all drop
     sizes, where N(D) is the size distribution of rain drops as a
     function of their diameter (D).
  4. (Ze)ice is calculated as the sixth moment of the particle size
     distributions for ice, but with several correction factors.  The
     first is accounting for the reduced backscatter from ice particles
     compared to liquid drops.  Because equivalent radar reflectivity
     assumes that the precipitation is in the form of rain, this
     correction factor is 0.189 (the ratio of the dielectric factor of
     ice divided by that of liquid water).  The second correction
     factor accounts for deviations in ice particle densities from that
     of solid ice, in which the radar backscatter from a large,
     low-density irregular ice particle (e.g., a fluffy aggregate) is
     the same as that from a solid ice sphere of the same mass (e.g., a
     small sleet pellet).
  5. (Ze)conv is the radar backscatter from parameterized subgrid-scale
     cumulus convection.  The algorithm that is employed in the NAM
     assumes the radar reflectivity at the surface is
     (Ze)conv=300*(Rconv)**1.4, where Rconv is the surface rain rate
     (mm/h) derived from the cumulus parameterization.  This so-called
     Z-R relationship is based on the original WSR-88D algorithm.  The
     radar reflectivity is assumed to remain constant with height from
     the surface up to the lowest freezing level, and then it is
     assumed to decrease by 20 dBZ from the freezing level to the top
     of the parameterized convective cloud.


For items 3 and 4 above, the simulated equivalent radar reflectivity
factor (Ze) is calculated from the sixth moment of the size distribution
for rain and ice particles assumed in the microphysics scheme.  Radar
reflectivities calculated from numerical models are highly sensitive to
the size distributions of rain and precipitating ice particles,
particularly since the radar backscatter is proportional to the sixth
power of the particle sizes.  In addition, the mixture of water and ice
within a wet ice particle (e.g., during melting) can have a large influence
on radar backscatter, and this effect is currently ignored in the forecast
reflectivity products.  The contribution from parameterized convection is
highly parameterized and is likely to provide the largest source of error
in the forecast reflectivities.  For strong local storms, which tend to
occur with greater frequency during the warm season, forecast reflectivities
from the NAM are expected to be less intense and cover larger areas than
the radar observations.
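
As a rough illustration of the convective contribution (item 5) and how the pieces combine (items 1 and 2), consider the sketch below. It is illustrative Python only, not the operational post-processor; the function names are made up, and a linear decrease in dBZ between the freezing level and cloud top is an assumption (the text above only specifies a 20 dBZ total reduction):

import math

def ze_conv(rconv_mm_per_h, z_m, z_freezing_m, z_cloud_top_m):
    """Parameterized-convection backscatter (item 5): Ze_conv = 300 * Rconv**1.4
    at the surface (WSR-88D Z-R relation), held constant with height up to the
    freezing level, then reduced so it is 20 dBZ lower at the convective cloud
    top (a linear decrease in dBZ is assumed here).  Illustrative only."""
    if rconv_mm_per_h <= 0.0:
        return 0.0
    ze_sfc = 300.0 * rconv_mm_per_h ** 1.4           # mm^6 m^-3
    if z_m <= z_freezing_m:
        reduction_dbz = 0.0
    elif z_m >= z_cloud_top_m:
        reduction_dbz = 20.0
    else:
        reduction_dbz = 20.0 * (z_m - z_freezing_m) / (z_cloud_top_m - z_freezing_m)
    return ze_sfc * 10.0 ** (-reduction_dbz / 10.0)

def reflectivity_dbz(ze_rain, ze_ice, ze_conv_val):
    """dBZ = 10*log10(Ze), with Ze the sum of the rain, ice, and convective
    contributions (all in mm^6 m^-3), per items 1 and 2 above."""
    ze = ze_rain + ze_ice + ze_conv_val
    return 10.0 * math.log10(ze) if ze > 0.0 else None   # None where undefined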

Back to Table of Contents

WHY CAN'T I FIND THE RADAR PRODUCTS IN THE NAM AWIP12 GRIB OUTPUT FILE? (14 July 06)

Assuming you are using the nam.tHHz.awip12FF.tm00 files at
ftp://ftpprd.ncep.noaa.gov/pub/data/nccf/com/nam/prod/nam.YYYYMMDD/ where HH is the cycle time = 00, 06, 12 or 18, FF is the forecast hour =
00, 01, 02 ... 24 and YYYYMMDD is year, month & day; you are probably
not seeing them because the new radar reflectivity fields are defined in
NCEP GRIB parameter table version #129
(http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html#TABLE129), and
you are expecting them to be defined in the normal/default Table #2 (http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html).  We had to
put them in this alternative table because Table 2 had no room for new
parameters to be added.  According to the GRIB documentation (see http://www.nco.ncep.noaa.gov/pmb/docs/on388/section1.html) you can find
which Parameter Table is being used in octet 4 of the GRIB PDS (Product
Definition Section).

Looking at an inventory of the nam.t12z.awip1212.tm00 file found in
ftp://ftpprd.ncep.noaa.gov/pub/data/nccf/com/nam/prod/nam.20060713, the
reflectivities are at records 171-174

171:47614082:d=06071312:REFD:kpds5=211:kpds6=109:kpds7=1:TR=0:P1=12:P2=0:TimeU=1:hybrid lev 1:12hr fcst:NAve=0
172:47975514:d=06071312:REFC:kpds5=212:kpds6=200:kpds7=0:TR=0:P1=12:P2=0:TimeU=1:atmos col:12hr fcst:NAve=0
173:48336946:d=06071312:REFD:kpds5=211:kpds6=105:kpds7=1000:TR=0:P1=12:P2=0:TimeU=1:1000 m above gnd:12hr fcst:NAve=0
174:48698378:d=06071312:REFD:kpds5=211:kpds6=105:kpds7=4000:TR=0:P1=12:P2=0:TimeU=1:4000 m above gnd:12hr fcst:NAve=0

record 171 = Lowest model level derived model radar reflectivity
record 172 = Composite radar reflectivity
record 173 = 1000 m above ground derived model radar reflectivity
record 174 = 4000 m above ground derived model radar reflectivity

Note the kpds5 entries above and that in Table 129, derived model
reflectivity is variable #211 and composite reflectivity is variable
#212. In Table 2 these would have been upward short-wave and long-wave
flux, and this is probably what your existing processing assumed they were.
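
If you want to check programmatically which parameter table each record uses, octet 4 of the PDS can be read directly. Below is a minimal sketch (illustrative Python, not a full GRIB decoder; it assumes GRIB edition 1 files, where the Indicator Section is 8 octets long, and the filename is taken from the inventory above):

def grib1_messages(data):
    """Yield (offset, table_version, parameter) for each GRIB1 message in a byte
    string.  Octets 5-7 of the Indicator Section give the total message length;
    PDS octet 4 is the parameter table version, PDS octet 9 the parameter number."""
    pos = 0
    while True:
        start = data.find(b"GRIB", pos)
        if start < 0:
            return
        msg_len = int.from_bytes(data[start + 4:start + 7], "big")
        pds = data[start + 8:]                 # PDS follows the 8-octet Indicator Section
        yield start, pds[3], pds[8]
        pos = start + msg_len

with open("nam.t12z.awip1212.tm00", "rb") as f:   # filename from the inventory above
    data = f.read()
for offset, table, param in grib1_messages(data):
    if table == 129 and param in (211, 212):       # REFD / REFC live in Table 129
        print(offset, table, param)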

Back to Table of Contents

HOW IS THE SOIL MOISTURE INITIALIZED IN THE NAM? (08/2001, updated 09/2013)

The NAM forecast is initialized with the NAM Data Assimilation System (NDAS). The NDAS runs a sequence of 4 GSI analyses and 3-h NMMB forecasts, starting at t-12h prior to the NAM forecast start time. At the t-12 h start time of the NDAS, we use atmospheric states from the NCEP Global Data Assimilation System (GDAS), while the NDAS soil states are fully cycled from the previous NDAS run (we call this hybrid setup "partial cycling").

The soil moisture and soil temperature fields in the 4-layer (Noah) land-surface model (LSM) used by the operational NDAS/NAM are continuously cycled without soil moisture nudging (e.g. to some climatology). The fields used now in the NAM are the sole product of model physics and internal NDAS surface forcing (e.g. precipitation and surface radiation). During the forecast portion of the NDAS, the model predicted precipitation that would normally be used as the forcing to the Noah scheme is replaced by the hourly merged Stage II/IV precipitation analyses (go here for details on the Stage II/IV analyses). The more timely Stage II (created directly from the hourly radar and gauge data) is used to supplement the Stage IV (regional analyses from the River Forecast Centers, mosaicked for national coverage).

To address biases in the Stage II/IV analyses, until January 2010 we used the CPC daily gauge analysis to construct a long-term 2-d precipitation surplus/deficit (hourly vs. the more accurate daily analysis). This long-term precip budget is used to make adjustments to the hourly analyses (up to +/- 20% of the original hourly values). As of January 2010, the CPC daily gauge analysis was terminated, so in the operational NDAS we are just making adjustments to the Stage II/IV based on the budget that was in place as of that date. In the parallel NDAS, we are using the Climatology-Calibrated Precipitation Analysis (CCPA, Hou et al, 2012 J. Hydro.) to make adjustments to the long-term precip budget. This will be implemented in operations in the next NAM upgrade sometime in 2014.

Back to Table of Contents

HOW DO I GET THE BUFR HOURLY MODEL SOUNDING DATA? (updated 2/27/2017)

The hourly BUFR sounding and surface variables data can be obtained on the NCEP NOMADS server at http://nomads.ncep.noaa.gov/index.shtml. For the NAM, click on the "http" link and then click the "nam.YYYYMMDD" link you wish to access. You will see these directories:

  1. bufr.tHHz (for 12 km NAM parent domain)
  2. bufr.${domain}nest.tHHz
where HH is the cycle time and "domain" is the NAM nest domain (conus, alaska, hawaii, or prico). The files are named bufr.STATID.YYYYMMDDHH where STATID is the 6-digit station id and YYYYMMDDHH is the date and hour of the beginning of the forecast. A station list is available. BUFR station data is also created from the HRRR, RAP, HIRESW, and SREF systems. A guide to unpacking BUFR can be found here. Meteograms from the 12 km NAM BUFR station data can be viewed at the NAM Meteogram web site. Hourly NAM 12km and CONUS nest BUFR soundings are also viewable online.
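
Putting the naming pieces together, a relative path can be assembled as in the sketch below (illustrative only; the directory layout on the NOMADS server should be confirmed from the "http" link above, and only the nam.YYYYMMDD / bufr.* / bufr.STATID.YYYYMMDDHH naming comes from this FAQ):

def nam_bufr_path(yyyymmdd, hh, statid, domain=None):
    """Relative path following the naming above: nam.YYYYMMDD / bufr.tHHz (or
    bufr.<domain>nest.tHHz) / bufr.STATID.YYYYMMDDHH."""
    subdir = f"bufr.t{hh}z" if domain is None else f"bufr.{domain}nest.t{hh}z"
    return f"nam.{yyyymmdd}/{subdir}/bufr.{statid}.{yyyymmdd}{hh}"

# e.g. nam_bufr_path("20170227", "12", "725030", domain="conus")
#  ->  'nam.20170227/bufr.conusnest.t12z/bufr.725030.2017022712'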

Back to Table of Contents

General

WHAT'S THE SNOW UPDATE PROCEDURE IN THE NAM AND GFS? (updated 31 Jan 12)

Snow in the NAM and GFS is updated once per day using analysis data from the National Ice Center's Interactive Multisensor Snow and Ice Mapping System (IMS) and the Air Force Weather Agency's SNODEP model. The IMS product is a snow cover analysis (yes/no flag) at 4 km resolution, northern hemisphere only. The AFWA product is a global physical snow depth analysis at 23 km. Both products are produced daily.

The IMS and AFWA datasets are interpolated to the NAM and GFS physics grids using a "budget" interpolation method in order to preserve total water volume. If IMS indicates snow cover, then the model snow depth is set to 5 cm or the AFWA depth, whichever is greater. If IMS indicates no snow cover, the model depth is set to zero regardless of the AFWA depth. The IMS data is used as a 'check' on the AFWA data because it has more accurate coverage, especially in mountain ranges (because of its higher resolution). For GFS points in the southern hemisphere, the AFWA depth is used as is.

Both models' prognostic equations are written in terms of snow liquid equivalent, so the analyzed depth is converted using a 5:1 ratio for the NAM and a 10:1 ratio for the GFS. The NAM is updated at the T-minus 12-hour point of the 06Z NDAS cycle. The GFS is updated at the 00Z point of the GFS and GDAS cycles. Snow depth is a prognostic field, so between update times the models will accumulate and melt it.
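
In per-gridpoint terms, the update described above behaves roughly like the sketch below (illustrative Python, not the operational code; the function names are made up):

def update_snow_depth(ims_snow_cover, afwa_depth_m):
    """Per-gridpoint snow depth where IMS coverage exists (northern hemisphere).
    For GFS points in the southern hemisphere the AFWA depth is used as is."""
    if ims_snow_cover:
        return max(0.05, afwa_depth_m)       # at least 5 cm where IMS shows snow cover
    return 0.0                               # no IMS snow cover -> bare ground

def depth_to_liquid_equivalent(depth_m, model="NAM"):
    """Convert analyzed depth to snow liquid equivalent: 5:1 (depth:SWE) for the
    NAM, 10:1 for the GFS."""
    ratio = 5.0 if model.upper() == "NAM" else 10.0
    return depth_m / ratio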

Back to Table of Contents

From SREF guidance on AWIPS, I see two consecutive 3 hr POP's of 3% yet the 6 hr POP for the identical period is 30%. How could this be?

Back to Table of Contents

What is the implication of the 2m Td cap fix in the new SREF_v7.0?

Back to Table of Contents