
November 19, 2009 Meeting Summary

Young Kwon gave a presentation titled "Sensitivity of Air-Sea Exchange Coefficients (Cd and Ch) on Hurricane Size and Intensity: Part VI - Test Results Using GFS Phase I Data". Young first showed a plot of Cd profiles versus wind speed for the operational HWRF (in red), Cd from the 2003 Powell paper (in maroon), and Cd from the 2007 Powell paper (in black). From the plot of Ch values versus wind speed, Young noted that the CBLAST Ch values (in maroon), coupled with Powell's 2003 Cd values, were used for the experiments shown in this presentation. HWRF runs using the 2007 Powell Cd values and the CBLAST Ch values are ongoing and will be presented later. Young then presented a plot of Ch/Cd for the operational HWRF (in red), CBLAST Ch/Powell '03 Cd (in maroon), and CBLAST Ch/Powell '07 Cd (in blue). The operational Ch/Cd profile is always greater than one, while the Ch/Cd profiles using Powell's Cd values decrease to less than 0.8, which should produce a less intense storm.
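For context (this reasoning step is added here and was not part of the presentation), the expectation that a smaller Ch/Cd ratio yields a weaker storm is consistent with potential intensity theory, in which the theoretical maximum wind scales roughly as

$$ V_{\max} \propto \sqrt{\frac{C_h}{C_d}} $$

so reducing Ch/Cd from above one to below 0.8 directly lowers the intensity ceiling the model can reach.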

Next, Young gave an overview of the experiments he conducted for this work. As previously mentioned, he used the 2003 Powell Cd values and the Ch values from the CBLAST paper by Jun Zhang. Four storms from the 2008 Atlantic hurricane season were run: 33 runs of Fay, 30 runs of Gustav, 39 runs of Hanna, and 52 runs of Ike, for a total of 154 runs. Two models were compared: H48N, the operational HWRF with the new GFS (blue in the following plots), and H5_5, which is H48N with the new surface physics (red in the following plots). Young then presented track and intensity errors for all four storms combined. The track errors for H48N and H5_5 were almost the same, but H5_5 reduced the intensity error relative to H48N by approximately 22% from 12 h onward. The intensity bias was reduced by almost half for the H5_5 experiments, and the standard deviation for H5_5 was smaller than that for H48N throughout. Young mentioned that the standard deviation results were promising because they indicated a more consistent intensity forecast. From the number-of-superior-performances plot, H5_5 was superior to H48N at all forecast times except 0 and 36 h.
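The statistics quoted above (mean intensity error, bias, and percent improvement of one model over another) can be sketched as follows. This is an illustrative computation only; the sample values are invented and are not the actual 2008 homogeneous-sample results.

```python
# Sketch of the verification statistics described above. The forecast
# and best-track intensities below are illustrative values, not data
# from the actual HWRF runs.

def intensity_stats(forecast_kt, best_track_kt):
    """Return (mean absolute error, mean bias) in knots."""
    errors = [f - o for f, o in zip(forecast_kt, best_track_kt)]
    mae = sum(abs(e) for e in errors) / len(errors)
    bias = sum(errors) / len(errors)
    return mae, bias

def percent_improvement(err_control, err_experiment):
    """Percent reduction in error of the experiment relative to control."""
    return 100.0 * (err_control - err_experiment) / err_control

# Illustrative 48-h intensities (kt) for one small homogeneous sample.
best_track = [60, 75, 90, 100]
h48n = [75, 90, 100, 115]   # control: large positive bias
h5_5 = [68, 82, 95, 106]    # new surface physics: smaller errors

mae_c, bias_c = intensity_stats(h48n, best_track)
mae_e, bias_e = intensity_stats(h5_5, best_track)
print(f"H48N  MAE={mae_c:.1f} kt, bias={bias_c:+.1f} kt")
print(f"H5_5  MAE={mae_e:.1f} kt, bias={bias_e:+.1f} kt")
print(f"improvement: {percent_improvement(mae_c, mae_e):.0f}%")
```

A homogeneous comparison (the same cycles verified for both models, as in the plots above) is what makes the percent-improvement figure meaningful.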

Next, Young showed track and intensity error plots for each individual storm, with emphasis on Fay, which had a large negative bias later in the forecast. For Gustav, the track errors for H48N and H5_5 were very similar, while H5_5 reduced the intensity error by approximately 20%. For Hanna, H5_5 improved track by about 7% and intensity by about 14%. For Ike, the track errors for H48N and H5_5 were almost the same, while H5_5 improved the intensity error by about 35%, the largest intensity error reduction. As Young pointed out, since the tracks for H48N and H5_5 were very similar, this improvement in intensity was not due to a change in track. Finally, for Fay, the track errors for H48N and H5_5 were again very similar, while H5_5 improved the intensity error by about 24%. The intensity bias for Fay showed an increasing negative bias from H5_5 later in the forecast period, when Fay was over land. The standard deviation for H5_5 was smaller than that for H48N at most forecast hours, especially 72 h and later.

Young concluded his presentation with his future plans. These include performing the experiments shown here using the 2007 Powell Cd values and the CBLAST Ch values. Young also plans to run these experiments using the new vortex initialization provided by Qingfu Liu. For 2010, Young's work will focus more on PBL parameterization.

Next, Sam Trahan presented his findings on the HFIP HWR4 (high-resolution HWRF) run on Vapor. The HWR4 setup, which was based on H209, included 13.5 km outer domain and 4.5 km inner domain resolutions with gravity wave drag (GWD) turned on. HWR4 was also coupled to POM at the operational resolution and used an 18 s outer domain timestep instead of the proposed 27 s timestep to prevent model instability. Six-hour cycling was used, and the model required 4 hours for initialization and 8-9 hours to run a forecast, which amounted to 1600-1900 CPU hours per run. HWR4 had a small sample of storms, mostly from Bill (03L) and Fred (07L), since the 2009 Atlantic hurricane season was rather slow. Sam then mentioned that, based on his findings, the track prediction for HWR4 exhibited very high errors and the intensity prediction had large positive biases.

Sam then presented track and intensity error plots for the operational HWRF (in blue), HWR4 (in red), H209 (in green), and the operational GFDL (in purple). He noted that for his comparison, only cycles that were run by all four models were plotted, and the error bars are at +/- one standard deviation. The track error for HWR4 is approximately 2-3 times the operational track error, but the intensity error for HWR4, while higher than that of the other models, is more reasonable. HWR4 also has the highest intensity bias, with about a 20 kt positive bias at 48 and 72 h.

Next, Sam detailed some of the issues with running HWR4 on Vapor. Since Vapor has a fair-share policy, only 7680 CPU hours per day were allowed for HWR4 runs. This permitted about 4.2 HWR4 runs per day, with no room for re-runs. The HWR4 runs, at 8-9 hours per 126 h forecast, were very close to the 9-hour wallclock limit on the server. Sam also experienced filesystem issues, resulting in 100-300 times slower performance for several hours at a time almost every week. Network issues on Vapor caused a 1.5-2.5 times slowdown, which required restarting the model on different nodes. In the end, about 1/3 of the cycles had to be re-run. Other issues included limited forecast data on Vapor and out-of-date nwprod executables. Sam also mentioned that GSI was not used for the HWR4 runs because the GSI executables were out of date. In addition, GWD may not be necessary for HWR4, and POM should be run at twice its current resolution with HWR4. Of the 95 HWR4 cycles that were run, Sam noted that 20 were cold starts, 50 were HISTORY runs (instead of real-time runs), 45 were forecast runs, and 61 had ocean initialization failures.
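The fair-share arithmetic above can be checked directly; the numbers come from the summary, and only the back-of-the-envelope calculation is added here.

```python
# Back-of-the-envelope check of the Vapor resource budget quoted above.
fairshare_cpu_hours_per_day = 7680
cpu_hours_per_run_low, cpu_hours_per_run_high = 1600, 1900

# Best and worst case runs/day under the fair-share allocation.
runs_per_day_high = fairshare_cpu_hours_per_day / cpu_hours_per_run_low
runs_per_day_low = fairshare_cpu_hours_per_day / cpu_hours_per_run_high

print(f"{runs_per_day_low:.1f} to {runs_per_day_high:.1f} runs/day")
# The midpoint of this range is consistent with the quoted
# "about 4.2 runs per day", with no headroom left for re-runs.
```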

Then Sam mentioned some issues involving the nest movement implementation. Currently, the nest's non-hydrostatic state is discarded after the nest moves; failure to do this leads to non-physical waves from the terrain change in the leading portion of the nest. For Sam's runs with a 9 km outer domain and 3 km inner domain using a 12 s timestep, sudden bursts of convection were seen after each nest move. These perturbations mostly disappeared after about 1 minute and were gone after 3 minutes. The minimum pressure field illustrated this readjustment to the hydrostatic state: after each nest move, the minimum pressure dipped very low and then oscillated before settling. Sam also pinpointed some convection issues with HWR4. The model produces cloud tops near 200 mb, while cloud tops near 100 mb are usually seen in actual hurricanes. This 200 mb cap could be due to the use of the SAS scheme for convection. More investigation into SAS precipitation versus grid-scale precipitation is required.

To conclude, Sam described the fixes already in place on Vapor to make running HWR4 easier. The GFS input data has been ported to Vapor, and updated GFDL executables are being used. Sam also created an auto-submission script that detects cold starts by the model, puts output from completed runs on HPSS, launches the next cycle of HWRF once the previous cycle's track has been produced, deletes old data, and e-mails the results to the user. Sam has also worked to speed up HWR4, which would be necessary if it is to be used in operations, by looking into issues associated with MPI communication between machines.
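The cycling logic of such a script can be sketched as a simple control loop. This is a hypothetical illustration, not Sam's actual script: all function names are invented, and a real version would call the batch system, HPSS tools, and a mail utility instead of the toy stand-ins below.

```python
# Hypothetical sketch of the auto-submission logic described above.
# run_cycle, archive, and notify are illustrative stand-ins for the
# real batch-submission, HPSS-archiving, and e-mail steps.

def cycle_loop(cycles, run_cycle, archive, notify):
    """Run cycles in order, archiving output and e-mailing results.

    The next cycle is launched only after the previous cycle's
    track has been produced; cold starts are reported to the user.
    """
    completed = []
    for cycle in cycles:
        result = run_cycle(cycle)
        if result.get("cold_start"):
            notify(f"{cycle}: cold start detected")
        if result.get("track") is None:
            notify(f"{cycle}: no track produced, halting chain")
            break
        archive(cycle, result)          # e.g. copy output to HPSS
        notify(f"{cycle}: track complete")
        completed.append(cycle)
    return completed

# Toy stand-ins to exercise the loop:
log = []
done = cycle_loop(
    ["2009082512", "2009082518", "2009082600"],
    run_cycle=lambda c: {"cold_start": c.endswith("12"), "track": "ok"},
    archive=lambda c, r: log.append(("archived", c)),
    notify=lambda msg: log.append(("mail", msg)),
)
```

Gating the next submission on the previous cycle's track mirrors the 6-hour cycling described earlier, since each cycle's initialization depends on the prior forecast.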

Please e-mail comments, questions, or suggestions about the contents of this webpage to Janna O'Connor.
