Zoltan Toth1, Istvan Szunyogh2,
Sharan Majumdar3, Rebecca Morss4, Brian Etherton3, Craig Bishop3, Stephen Lord,
Marty Ralph5, Ola Persson5, and Zhao-Xia Pu6
Environmental Modeling Center, NCEP, NWS/NOAA
Washington DC 20233
1. INTRODUCTION
During the past three winters, NCEP participated in four field programs on targeted observations, with the aim of improving short-range weather forecasts by adaptively taking dropsonde observations in sensitive areas. In the first field program (FASTEX, January-February 1997; Joly et al., 1999; Szunyogh et al., 1999a), the goal was to test different targeting methods in the field. Based on the FASTEX results, NCEP chose to further test the Ensemble Transform (ET) technique (Bishop and Toth, 1996; 1999) in upcoming field programs. In January-February 1998 (NORPEX; Langland et al., 1999; Szunyogh et al., 1999b) and March 1998 (CALJET; Ralph et al., 1999), NCEP's goal was to evaluate the performance of an operationally feasible targeting methodology based on the ET technique, both on the synoptic scale (NORPEX) and on the mesoscale (CALJET).
The 1999 Winter Storm Reconnaissance program (WSR99; Toth et al., 1999; Szunyogh et al., 1999c) took place in January-February 1999. This was a quasi-operational program designed to improve 24-96 hour forecasts of significant weather events over the continental US, including Alaska. The weather events (time and location) for which the forecasts were to be improved were selected by operational forecasters in field offices, coordinated by HPC. Sensitivity calculations were then carried out to find the area over the northeast Pacific from which extra data could most influence and improve the preselected forecast events. All adaptive data were used in the operational analysis/forecast cycle. The impact of the data was evaluated in near real time using a parallel analysis/forecast cycle from which all adaptive data were excluded. As a new feature, the ET sensitivity calculations were modified to directly identify the best flight track out of a set of predesigned choices. With this development, the targeted observation decision-making algorithm can be fully automated.
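The automated track selection step can be illustrated with a minimal sketch. The details of the ET-based impact estimate are beyond this summary; here the per-track predicted impact values are assumed to have been precomputed as scalars, and all names and numbers are hypothetical.

```python
def best_flight_track(predicted_impact):
    """Select, from a set of predesigned flight tracks, the one whose
    ET-estimated forecast impact in the verification region is largest.

    predicted_impact: dict mapping a track identifier to a scalar
    predicted impact (e.g., estimated signal variance in the
    verification region).
    """
    return max(predicted_impact, key=predicted_impact.get)

# Hypothetical predicted impacts for three predesigned tracks
tracks = {"track_A": 0.31, "track_B": 0.87, "track_C": 0.54}
chosen = best_flight_track(tracks)  # -> "track_B"
```

With the predicted impacts in hand, the selection itself reduces to an argmax, which is what makes the decision-making step fully automatable.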
In this paper we review important aspects of the targeting methodology: case selection, the performance of the targeting technique, and forecast improvement, based on as yet unpublished results.
2. CASE SELECTION
Case selection should consider (1) the likelihood of threatening weather events (large amounts of precipitation, strong low-level winds, etc.), and (2) the information content of the forecasts. Weather events that have the potential for large societal impact and that are associated with relatively large uncertainty should be the targets of adaptive observations. Practicing forecasters have a good understanding of these two factors, since they are concerned with them on a daily basis.
A good source of information in this respect is an ensemble of forecasts, from which both the likelihood of a particular event and the overall information content (or uncertainty) of the forecast can be readily evaluated. In Fig. 1 we present a 24-hour PQPF forecast example for a large precipitation amount case. Even at longer, 5-8 day lead times, the ensemble-based PQPF gave relatively high (around 50%) probability values for an inch or more of precipitation in this high-predictability case. Consequently, there is a threat in the forecast of a significant event occurring. Even at 3-4 day lead time, however, there were large areas where the probabilities were close to neither 1 nor 0. In those areas of intermediate probability values, the forecast uncertainty is high and the forecast information content is relatively low; therefore, there is a need for adaptive observations to be taken. Based on probabilistic forecasts derived from an ensemble, we plan to develop objective guidance products to aid the case selection work of practicing forecasters.
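As a concrete illustration of how such ensemble-based exceedance probabilities are obtained, the sketch below computes a probability field as the fraction of ensemble members above a threshold. The member data are synthetic, and the grid size and member count are for illustration only.

```python
import numpy as np

def pqpf(ensemble_precip, threshold_mm):
    """Probability of precipitation exceeding a threshold, estimated as
    the fraction of ensemble members at or above it at each grid point.

    ensemble_precip: array of shape (n_members, ny, nx), in mm
    """
    return (ensemble_precip >= threshold_mm).mean(axis=0)

# Synthetic 10-member ensemble on a small grid (gamma-distributed amounts)
rng = np.random.default_rng(0)
members = rng.gamma(shape=2.0, scale=10.0, size=(10, 4, 4))

prob = pqpf(members, threshold_mm=25.4)  # P(precip >= 1 inch)
# Grid points with intermediate probabilities (far from both 0 and 1)
# flag high forecast uncertainty, i.e., candidate cases for targeting.
```

A guidance product along these lines could then highlight regions where the event is both plausible and uncertain, which is exactly the combination that motivates targeting.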
3. PERFORMANCE OF THE TARGETING TECHNIQUE
Of primary concern when using a targeting algorithm is whether the forecast impact of the data taken with its aid appears where expected: in the preselected verification area, associated with the weather event for which the forecast is to be improved. In other words: Is there a large signal (difference between two forecasts started from analyses with and without the targeted data) within the verification area? For the ET targeting technique used at NCEP, this was tested in a total of 52 cases. As seen from Table 1, out of the 48 (40) real targeted cases, in 39 (35) cases the absolute maximum in the surface pressure (precipitation) signal did occur in the verification region at the verification time. There were 6 (3) cases when a local maximum occurred, and only 3 (2) cases when no maximum was observed in the surface pressure (precipitation) signal within the verification region.
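The three categories in Table 1 (absolute maximum, local maximum, no maximum) can be diagnosed from the signal field itself. The sketch below shows one simple way to classify a case, assuming gridded forecasts and a boolean verification-region mask; the local-maximum test here is a crude stand-in for the inspection actually used, and all field values are synthetic.

```python
import numpy as np

def signal_max_in_region(fc_with, fc_without, region_mask):
    """Classify where the forecast 'signal' (difference between runs with
    and without targeted data) peaks relative to a verification region.

    Returns 'absolute' if the domain-wide maximum of |signal| falls inside
    the region, 'local' if the region contains a weaker but still
    prominent peak, and 'none' otherwise. Fields have shape (ny, nx);
    region_mask is boolean with the same shape.
    """
    signal = np.abs(fc_with - fc_without)
    inside_max = signal[region_mask].max()
    if inside_max == signal.max():
        return "absolute"
    # crude prominence check: strongest inside value exceeds the mean
    # signal level outside the region
    if inside_max > signal[~region_mask].mean():
        return "local"
    return "none"

# Example: a with-data forecast differing from the control only at a
# point inside the verification region
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
control = np.zeros((4, 4))
with_data = np.zeros((4, 4))
with_data[2, 2] = 5.0
print(signal_max_in_region(with_data, control, mask))  # -> absolute
```

Applied case by case, a classification of this kind yields counts of the three outcomes directly comparable to the Table 1 summary.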
As an example, we present in Fig. 2 the initial and final impact plots for all six March 1999 CALJET flights. Note that the ET technique is not designed to center the signal on the verification area; rather, it is designed to maximize the impact within it. It follows that the lack of a local maximum within the verification area can still be consistent with the ET technique. What the method tells us is that the largest forecast signal within the verification region is expected from data taken in the identified most sensitive region. Data taken elsewhere may create a forecast signal centered on the verification region, but the absolute magnitude of that signal is expected to be below that originating from the most sensitive area.
Consider now whether this is the case with the 19990323 CALJET flight data (Fig. 2d), where the largest forecast impact of the data appears northwest of the verification area, around 47.5N, 130W. Beyond identifying the most sensitive area, the ET technique can also be used to estimate the forecast impact of the data in advance; in this case, that calculation indicated that the largest impact of the data should be expected northwest of the verification area (Fig. 3, around 45N, 130-135W). This is indeed in the vicinity of where the actual data impact is observed (Fig. 2d), confirming that the ET sensitivity results are consistent with the observed evolution of the data impact. Similar results were found in most other cases in which the absolute maximum of the signal missed the verification region.
The above results are confirmed by twelve data denial experiments performed on seven FASTEX cases. Traditional data were withdrawn from areas of moderate sensitivity, and the resulting impact was found to be significantly below that from the most sensitive areas (Szunyogh et al., 1999a).
Targeting methods other than the ET technique were also tested during FASTEX. In fact, there were three cases when adaptive data were collected, based on other adaptive techniques, in areas that were virtually insensitive according to the ET sensitivity calculations. In addition, the 19990313 CALJET flight, which was designed subjectively to explore areas of synoptic significance, went into an area found insensitive by the ET technique. These four cases offer an opportunity to test whether there is any impact from areas found non-sensitive by the ET technique. The individual results can be seen in Figs. 4 and 2b. It is clear from the figures that in all four cases the signal reaches much higher values outside the verification region than inside it. In Fig. 5 we compare the impact the data had within the verification region when taken in highly sensitive vs. non-sensitive situations. As Fig. 5 indicates, the impact of data collected in areas found non-sensitive by the ET technique is negligible compared to that from truly sensitive areas. These results confirm that the ET technique is capable of differentiating among strongly sensitive, moderately sensitive, and insensitive areas.
4. FORECAST IMPROVEMENT
For practical applications of an adaptive observation strategy, forecast improvement is as important as placing the data impact at the right place at the right time. Whether the forecasts improve due to the targeted data depends primarily on the quality of the assimilation procedure. It is a challenging (and never before tested) task for the analysis to use isolated patches of targeted data to their full potential. In fact, since assimilation schemes are statistical in nature, it is never guaranteed, no matter how good an analysis scheme is, that good quality extra data will improve the forecasts in every single case. What we can expect is that, as assimilation schemes become more advanced, a higher proportion of targeting cases will be associated with improved forecasts.
Since the first adaptive field program, FASTEX, the SSI three-dimensional assimilation scheme has gone through several important changes (Derber et al., 1998). The verification summary statistics in Table 1 indicate that these changes had a positive impact on the performance of targeted analyses and forecasts in later years. While during FASTEX we saw a positive impact from assimilating the targeted data in only 3 out of 5 cases that could be evaluated, the ratio improves to 28 out of 37, 25 out of 35, and 20 out of 22 for surface pressure, tropospheric wind, and accumulated precipitation observations, respectively, in the last three field programs. Overall, 29 out of 34 cases (85%) showed a clearly positive data impact on forecast quality, a result that is statistically highly significant.
As an example, the five March 1999 CALJET cases, in which the objective ET targeting technique was used, are evaluated in Fig. 6. Shown is the rms error of forecasts run from analyses made with and without the targeted data, verified against observed surface pressure within the preselected 500-km radius verification regions. In all five cases the adaptive observations had a small but measurable positive impact on the forecasts.
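The verification used in Fig. 6 can be sketched as follows: compute the rms of forecast-minus-observed surface pressure over the stations falling within 500 km of the verification-region center. The haversine distance formula is standard; the station coordinates and error values below are illustrative only.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in km, inputs in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlam = np.radians(np.asarray(lon2) - np.asarray(lon1))
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def rms_in_region(fc_minus_obs, stn_lat, stn_lon, center, radius_km=500.0):
    """RMS of forecast-minus-observation over stations within radius_km
    of the verification-region center (lat, lon)."""
    d = great_circle_km(stn_lat, stn_lon, center[0], center[1])
    sel = d <= radius_km
    return float(np.sqrt(np.mean(np.asarray(fc_minus_obs)[sel] ** 2)))

# Illustrative stations: two within 500 km of the center, one far away
lats = np.array([45.0, 45.0, 45.0])
lons = np.array([-120.0, -119.0, -100.0])
errors_with = np.array([1.0, 3.0, 10.0])  # forecast-minus-observed, hPa
print(rms_in_region(errors_with, lats, lons, center=(45.0, -120.0)))  # -> ~2.236
```

Running this once for the with-data forecast and once for the without-data forecast gives the paired rms values compared case by case in Fig. 6.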
As a synoptic example, we present a WSR99 case in Fig. 7. A large surface pressure error, up to 6 hPa, is observed over northern Washington state in the forecast made without the targeted data. The error is reduced to around 2 hPa in the forecast started from an analysis that ingested the targeted data. The impact of the data falls clearly within the verification region, reducing the errors over an extensive area by 40-80%.
We conclude with a discussion of a figure (Fig. 8) borrowed from Toth et al. (1999). This figure highlights the three main concerns of any adaptive observation strategy discussed above: (1) case selection, (2) the performance of the targeting technique, and (3) forecast improvement. Shown in the figure are summary results for the fifteen 1999 Winter Storm Reconnaissance program cases. All results are for 48-hour forecasts; the average location of the verification region at this lead time is shown as a dashed ellipse. First, the fact that the surface pressure forecast error averaged over the 15 cases (contour lines) has a strong local maximum within the verification region attests that it is possible to identify problematic forecast events in an operational fashion. Second, the maximum of the average impact of the adaptive data (shaded area) is within the verification region and overlaps the area of locally largest forecast error, attesting that successful targeting is operationally possible. And third, the impact of the data (rms error reduction of up to 10-20%) is positive, demonstrating that the data assimilation method currently operational at NCEP is capable of utilizing the information contained in the adaptive data. The 10% regional error reduction for the most significant forecast events is comparable in size to the 10% error reduction over the Northern Hemisphere extratropics due to overall improvements in the quality and quantity of the observing system over the most recent 25-year period (Kistler et al., 1999; Toth et al., 1999).
Based on the positive results of the past field experiments, NCEP is planning to fully operationalize adaptive observations in the framework of the Winter Storm Reconnaissance program.
ACKNOWLEDGEMENTS
The 1999 Winter Storm Reconnaissance Program and the other field experiments would not have been possible without the work of a large number of participants and collaborators. Specifically, we would like to acknowledge the dedicated work of the NOAA G-IV (led by Sean White) and the USAF Reserve C-130 flight crews (coordinated by Jon Talbot). Coordination with the flight facilities was provided by CARCAH, led by John Pavone. Our work at various stages benefited from collaboration with Kerry Emanuel (MIT), Robert Gall (USWRP), Eugenia Kalnay (Univ. Maryland), David Reynolds (NCEP/HPC), Chris Snyder (NCAR), Mel Shapiro (NCAR), Cecile Girz (NOAA/ERL/FSL), and Ron Gelaro and Rolf Langland (NRL). The European Centre for Medium-Range Weather Forecasts is credited for providing their ensemble forecast data for use in the real-time sensitivity calculations. Mark Iredell, Jack Woollen, and Timothy Marchok provided valuable help with computational issues, data manipulation, and graphics.
REFERENCES
Bishop, C., and Z. Toth, 1996: Using ensembles to identify observations likely to improve forecasts. Preprints, 11th AMS Conference on Numerical Weather Prediction, 19-23 August 1996, Norfolk, VA, 72-74.
Bishop, C. H., and Z. Toth, 1999: Ensemble transformation and adaptive observations. J. Atmos. Sci., 56, 1748-1765.
Joly, A., K. A. Browning, P. Bessemoulin, J.-P. Cammas, G. Caniaux, J.-P. Chalon, S. A. Clough, R. Dirks, K. A. Emanuel, L. Eymard, F. Lalaurette, R. Gall, T. D. Hewson, P. H. Hildebrand, D. Jorgensen, R. H. Langland, Y. Lemaitre, P. Mascart, J. A. Moore, P. O. G. Persson, F. Roux, M. A. Shapiro, C. Snyder, Z. Toth, and R. M. Wakimoto, 1998: Overview of the field phase of the Fronts and Atlantic Storm-Track Experiment (FASTEX) project. Quart. J. Roy. Meteor. Soc., in press.
Kistler, R., E. Kalnay, W. Collins, S. Saha, G. White, J. Woollen, M. Chelliah, W. Ebisuzaki, M. Kanamitsu, V. Kousky, H. van den Dool, R. Jenne, and M. Fiorino, 1999: The NCEP/NCAR 50-year Reanalysis. Bull. Amer. Meteor. Soc., under review.
Langland, R. H., Z. Toth, R. Gelaro, I. Szunyogh, M. A. Shapiro, S. Majumdar, R. Morss, G. D. Rohaly, C. Velden, N. Bond, and C. Bishop, 1999: The North Pacific Experiment (NORPEX-98): Targeted observations for improved North American weather forecasts. Bull. Amer. Meteor. Soc., 80, 1363-1384.
Ralph, F. M., O. Persson, D. Reynolds, W. Nuss, D. Miller, J. Schmidt, D. Jorgensen, J. Wilczak, P. Neiman, J.-W. Bao, D. Kingsmill, Z. Toth, C. Velden, A. White, C. King, and J. Wurman, 1999: The California Land-falling Jets Experiment (CALJET): Objectives and design of a coastal atmosphere-ocean observing system deployed during a strong El Niño. Preprints, 3rd Symp. on Integrated Observing Systems, 10-15 January 1999, Dallas, TX, Amer. Meteor. Soc., 78-81.
Szunyogh, I., Z. Toth, K. A. Emanuel, C. Bishop, C. Snyder, R. Morss, J. Woollen, and T. Marchok, 1999a: Ensemble-based targeting experiments during FASTEX: The impact of dropsonde data from the LEAR jet. Quart. J. Roy. Meteor. Soc., in press.
Szunyogh, I., Z. Toth, S. Majumdar, R. Morss, C. Bishop, and S. Lord, 1999b: Ensemble-based targeted observations during NORPEX. Preprints of the 3rd Symposium on Integrated Observing Systems,10-15 January 1999, Dallas, Texas, 74-77.
Szunyogh, I., Z. Toth, S. Majumdar, R. Morss, B. Etherton, and C. Bishop, 1999c: The effect of targeted dropsonde observations during the 1999 Winter Storm Reconnaissance program. Mon. Wea. Rev., under review.
Toth, Z., I. Szunyogh, S. Majumdar, R. Morss, B. Etherton, C. Bishop, and S. Lord, 1999: The 1999 Winter Storm Reconnaissance Program. Preprints for the 13th AMS Conference on Numerical Weather Prediction, 13-17 September 1999, Denver, CO, 27-32.
2. UCAR Visiting Scientist at EMC/NCEP; 3. Pennsylvania State University, State College, PA; 4. MIT, Cambridge, MA; 5. NOAA/ERL, Boulder, CO; 6. USRA at NASA/GSFC, Greenbelt, MD