THE 1999 WINTER STORM RECONNAISSANCE PROGRAM


Zoltan Toth1, Istvan Szunyogh2,
Sharan Majumdar3, Rebecca Morss4, Brian Etherton3, Craig Bishop3, and Stephen Lord
Environmental Modeling Center, NCEP, NWS/NOAA
Washington DC 20233

1. INTRODUCTION

Between January 13 and February 10, 1999, a quasi-operational field program took place over the northeastern Pacific. The aim of the program was to improve 1-4 day lead time weather forecasts of critical weather events by adaptively collecting data in otherwise poorly observed oceanic areas.
The critical weather events for which the forecasts were to be improved were selected by forecasters at the Hydrometeorological Prediction Center of NCEP, based on input from NWS field forecast offices. The areas from where adaptively taken extra observations were expected to have the maximum forecast impact were selected by the Environmental Modeling Center of NCEP, using the Ensemble Transform (ET, Bishop and Toth, 1999) method.
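The ET calculation itself is documented in Bishop and Toth (1999); as a rough illustration of the underlying idea only, the toy sketch below (Python with NumPy, synthetic data, hypothetical array sizes) estimates, for each candidate observation point, how much a single perfect-quality observation there would reduce the ensemble-predicted variance of a verification-region forecast metric. This single-observation variance-reduction formula is a simplification and not the operational ET algorithm.

```python
# Toy illustration of ensemble-based targeting: for each candidate observation
# point, estimate how much one observation there would reduce the ensemble-
# predicted variance of a verification-region forecast metric (e.g. mean
# surface pressure at verification time). Simplified stand-in for the
# Ensemble Transform of Bishop and Toth (1999); all data are synthetic.
import numpy as np

rng = np.random.default_rng(0)

n_members = 25          # ensemble size (illustrative)
n_points = 500          # candidate observation locations over the Pacific

# Synthetic analysis-time ensemble perturbations at the candidate points
# (members x points) and a forecast metric J for each ensemble member.
x_a = rng.standard_normal((n_members, n_points))
J = x_a @ rng.standard_normal(n_points) * 0.05 + rng.standard_normal(n_members)

x_a -= x_a.mean(axis=0)         # remove ensemble mean
J -= J.mean()

obs_err_var = 0.1               # assumed observation-error variance

# Single-observation variance-reduction formula: observing point j reduces
# the variance of J by approximately cov(x_j, J)^2 / (var(x_j) + obs_err_var).
cov_xJ = (x_a * J[:, None]).sum(axis=0) / (n_members - 1)
var_x = x_a.var(axis=0, ddof=1)
var_reduction = cov_xJ**2 / (var_x + obs_err_var)

best = np.argsort(var_reduction)[::-1][:10]
print("Most sensitive candidate points:", best)
print("Predicted variance reduction:", np.round(var_reduction[best], 4))
```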
In a total of 15 cases, one or two aircraft (the NOAA G-IV and/or USAF Reserve C-130 planes) were deployed in these "sensitive" regions to collect data by releasing a total of close to 500 dropsondes. The Winter Storm Reconnaissance 1999 (WSR99) program coincided with a research experiment aimed at studying clear air turbulence (SCATCAT, M. Shapiro and C. Girz, personal communication).
The flight missions were often designed to improve weather forecasts for more than one verification region, at different lead times. The missions supported West Coast weather forecasts in 14 cases, eastern US forecasts in 9 cases, and Alaska forecasts in 2 cases.

2. DATA IMPACT

The average sensitivity pattern for the 15 flight cases is displayed in Fig. 1. The flights, originating from Hawaii and Anchorage, Alaska, could reasonably sample the sensitive area in most cases. All dropsonde data were used operationally. To evaluate the impact of the targeted data, a parallel (control) data assimilation and forecast cycle, from which all dropsonde data were excluded, was run and tested against the operational analysis/forecast results throughout the field program. Fig. 1 shows the impact that assimilating the extra dropsonde data at T62 model resolution had, beyond the use of all other operationally available data. In the most sensitive and best observed areas the average initial impact in surface pressure exceeds 0.6 hPa.
The average location of the verification region for the 14 West Coast cases is shown as an ellipse on the bottom panel of Fig. 1. The average verification lead time for these cases is 36 hours. The same panel also indicates where the average control 36-hour forecast error is large. Note the local maximum in surface pressure forecast error within the verification area, confirming that the weather events selected by the forecasters were associated with a large degree of uncertainty. The maximum impact of the extra data at 36 hours lead time (the difference between forecasts initialized with and without the dropsonde data), as seen from the same figure, reaches the verification region and has its largest values (1.1 hPa) over the area of largest forecast errors. Note that the average impact of the data has a magnitude roughly one third of the forecast error itself.
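As a minimal sketch of the impact diagnostic described above, the fragment below computes the case-averaged absolute difference between fields from the operational cycle (dropsondes assimilated) and the parallel control cycle (dropsondes withheld); the grid sizes and values are synthetic placeholders, and applying the same difference to 36-hour forecast fields gives the forecast impact shown in Fig. 1.

```python
# Sketch of the data-impact diagnostic: the impact of the dropsonde data is
# the difference between the operational fields (dropsondes assimilated) and
# the parallel control fields (dropsondes withheld), averaged over the flight
# cases. Fields here are synthetic; real ones would come from the two T62
# analysis/forecast archives.
import numpy as np

n_cases, nlat, nlon = 15, 73, 144          # illustrative global grid

rng = np.random.default_rng(1)
ps_oper = 1010.0 + rng.standard_normal((n_cases, nlat, nlon))   # hPa, with dropsondes
ps_ctrl = 1010.0 + rng.standard_normal((n_cases, nlat, nlon))   # hPa, without dropsondes

# Mean absolute analysis (or 36-h forecast) impact of the extra data,
# as plotted in Fig. 1: |operational - control| averaged over the 15 cases.
impact = np.abs(ps_oper - ps_ctrl).mean(axis=0)

print("Peak surface-pressure impact: %.2f hPa" % impact.max())
```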

3. VERIFICATION OF TARGETED FORECASTS

In the previous section we saw that the targeted dropsonde data had a substantial impact on the targeted forecasts. Here we explore whether this impact is positive, i.e., whether the error in the targeted forecasts is reduced compared with the control forecasts.
The operational, targeted forecasts and the control forecasts are verified objectively using surface pressure (Fig. 2) and wind (Fig. 3) observations at verification time and within the 1000 km radius verification regions. The 24-hour accumulated precipitation forecasts were evaluated subjectively by considering the amount and timing of precipitation events. Combining the three different verification statistics, the overall results indicate that in close to 80% of the verification cases the targeted data improved the quality of the critical weather forecasts (Table 1). In particular, in 18 of the 23 cases that could be evaluated, the forecast error was reduced. This result is statistically significant at the 0.5% level.
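The quoted significance level is consistent with a one-sided sign (binomial) test on the 23 evaluable cases; the short calculation below, offered as a sketch under that assumption, yields a p-value of about 0.5%.

```python
# Sketch of the significance calculation, assuming a one-sided sign test:
# under the null hypothesis that the targeted data improve or degrade a
# forecast with equal probability, how likely are 18 or more improved
# forecasts out of 23 evaluable cases?
from math import comb

n, k = 23, 18
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"P(>= {k} improvements out of {n}) = {p_value:.4f}")   # ~0.0053, i.e. ~0.5%
```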

4. A CASE STUDY

Fig. 4 shows the control and operational (targeted) 24-36 hour lead time accumulated precipitation forecasts and the corresponding rain gauge based analyzed amounts for the 12-hour period ending at 1200 UTC January 20, 1999. The shaded area indicates the impact of the data, which was to shift the area of maximum precipitation by increasing the predicted amounts around the area of maximum observed precipitation in central California. Note that both forecasts were run at T62 (approximately 220 km) resolution and therefore were unable to capture the orographically forced local maxima in the observed precipitation amount.

5. HISTORICAL PERSPECTIVE

In Fig. 5 we show the percentage of the control 48-hour surface pressure forecast error that was removed by the use of dropsonde data collected during the 15 flight missions. The maximum error reduction is in the 10-20% range and is well within the average location of the verification regions at 48-hour lead time. Also, the area of maximum error reduction coincides with the location of maximum control forecast error, attesting to the success of the targeting procedure.
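As a sketch of how a field like Fig. 5 can be constructed (assuming a simple point-by-point comparison of absolute forecast errors; the arrays below are synthetic placeholders), the percentage of the control error removed by the targeted data is computed as follows.

```python
# Sketch of the error-reduction diagnostic: the percentage of the control
# 48-h forecast error removed by the dropsonde data, computed point by point
# from the control and targeted forecast errors against the verifying
# analysis. Arrays are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
nlat, nlon = 73, 144
err_ctrl = np.abs(rng.standard_normal((nlat, nlon))) * 3.0 + 0.5   # hPa, control error
err_targ = err_ctrl * (1.0 - 0.15 * rng.random((nlat, nlon)))      # hPa, targeted error

# Percentage of the control error removed by using the dropsonde data.
pct_removed = 100.0 * (err_ctrl - err_targ) / err_ctrl

print("Maximum error reduction: %.1f%%" % pct_removed.max())
```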
A question can be asked regarding the significance of the 10-20% error reduction. To address this question we use a figure from Kistler et al. (1999, Fig. 6). Reanalyses and reforecasts were created, using the same NCEP analysis/forecast system, for the past few decades. In this setup, the forecasts can improve with time only due to improvements in data quantity and/or quality. As can be seen from the figure, it took approximately 25 years of effort, through the introduction of new observing platforms and other advances, to reduce the 2-day 500 hPa height errors over the Northern Hemisphere extratropics by 10%. During the Winter Storm Reconnaissance program, the same level of error reduction was achieved over the selected critical verification regions by deploying on the order of 30 dropsondes every other day.

6. CONCLUSIONS

Based on the positive targeting experience accumulated during the FASTEX (Szunyogh et al., 1999a), NORPEX (Szunyogh et al., 1999b), CALJET (Ralph et al., 1999), and WSR99 (Szunyogh et al., 1999c) field programs, NCEP plans to fully operationalize the Winter Storm Reconnaissance program, using the NOAA G-IV and USAF C-130 planes.
In preparation for a full operational implementation, special sensitivity calculations were also prepared for some WSR99 missions, in addition to the traditional sensitivity charts (Fig. 7, top panel). These special calculations provide an estimate of the expected data impact from each of a number of preselected flight patterns (Fig. 7, bottom panel). In the example shown, the largest forecast impact is expected from flight number 23, which covers the central area of sensitivity as displayed on the traditional sensitivity chart in the upper panel of Fig. 7. The use of predesigned flight tracks and associated track-specific sensitivity calculations will facilitate the transition of the targeting technology into NWS operations.
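A rough sketch of this track-selection step is given below; it reuses the simplified single-point variance-reduction formula from the earlier sketch rather than the full Ensemble Transform calculation, and the track names and the points each track samples are hypothetical.

```python
# Sketch of ranking preselected flight patterns by expected forecast impact:
# score each track by the estimated reduction of verification-region forecast
# variance from observations along that track, then pick the best. Simplified
# illustration only; the operational system uses the full ET calculation.
import numpy as np

rng = np.random.default_rng(3)
n_members, n_points = 25, 500

x_a = rng.standard_normal((n_members, n_points))
J = x_a @ rng.standard_normal(n_points) * 0.05 + rng.standard_normal(n_members)
x_a -= x_a.mean(axis=0)
J -= J.mean()

# Each candidate flight pattern samples a different subset of grid points
# (hypothetical track definitions).
tracks = {f"flight_{i:02d}": rng.choice(n_points, size=30, replace=False)
          for i in range(20, 26)}

obs_err_var = 0.1
cov_xJ = (x_a * J[:, None]).sum(axis=0) / (n_members - 1)
var_x = x_a.var(axis=0, ddof=1)
point_impact = cov_xJ**2 / (var_x + obs_err_var)

# Crude track score: sum of single-point variance reductions along the track
# (ignores correlations between the dropsonde points).
scores = {name: point_impact[idx].sum() for name, idx in tracks.items()}
best = max(scores, key=scores.get)
print("Expected impact by track:", {k: round(v, 3) for k, v in scores.items()})
print("Selected track:", best)
```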

7. ACKNOWLEDGEMENTS

The 1999 Winter Storm Reconnaissance Program would not have been possible without the work of a large number of participants and collaborators. We would like to acknowledge the dedicated work of the NOAA G-IV (led by Sean White) and the USAF Reserve C-130 (coordinated by Jon Talbot) flight crews. Coordination with the flight facilities was provided by CARCAH, led by John Pavone. The WSR program benefited from a collaboration with the concurrently run SCATCAT (Severe Clear-Air Turbulence Colliding with Air Traffic) research program under the leadership of Cecile Girz (NOAA/ERL/FSL) and Mel Shapiro (NCAR). The European Centre for Medium-Range Weather Forecasts is thanked for providing its ensemble forecast data for use in the real-time sensitivity calculations. The forecast cases were selected in real time by NWS field offices and NCEP/HPC forecasters, coordinated by David Reynolds. Mark Iredell, Jack Woollen, and Timothy Marchok provided valuable help with setting up the parallel analysis/forecast cycle, manipulating data, and creating graphics.

8. REFERENCES

Bishop, C. H., and Z. Toth, 1999: Ensemble transformation and adaptive observations. J. Atmos. Sci., 56, 1748-1765.

Kistler

Ralph, F. M., O. Persson, D. Reynolds, W. Nuss, D. Miller, J. Schmidt, D. Jorgensen, J. Wilczak, P. Neiman, J.-W. Bao, D. Kingsmill, Z. Toth, C. Velden, A. White, C. King, and J. Wurman, 1999: The California Land-falling Jets Experiment (CALJET): Objectives and design of a coastal atmosphere-ocean observing system deployed during a strong El Niño. Preprints, 3rd Symp. on Integrated Observing Systems, 10-15 January 1999, Dallas, Texas, Amer. Meteor. Soc., 78-81.

Szunyogh, I., Z. Toth, K. A. Emanuel, C. Bishop, C. Snyder, R. Morss, J. Woollen, and T. Marchok, 1999a: Ensemble-based targeting experiments during FASTEX: The impact of dropsonde data from the LEAR jet. Quart. J. Roy. Meteor. Soc., in press.

Szunyogh, I., Z. Toth, S. Majumdar, R. Morss, C. Bishop, and S. Lord, 1999b: Ensemble-based targeted observations during NORPEX. Preprints, 3rd Symp. on Integrated Observing Systems, 10-15 January 1999, Dallas, Texas, Amer. Meteor. Soc., 74-77.

Szunyogh, I., Z. Toth, S. Majumdar, R. Morss, B. Etherton, and C. Bishop, 1999c: The effect of targeted dropsonde observations during the 1999 Winter Storm Reconnaissance program. Mon. Wea. Rev., under review.