Zoltan Toth, Istvan Szunyogh,
Craig Bishop, Sharan Majumdar, Rebecca Morss, and Stephen Lord
Environmental Modeling Center, NCEP, NWS/NOAA
Washington DC 20233


Numerical forecasts of chaotic systems like the atmosphere are limited by the use of imperfect models and imperfect initial conditions. As Lorenz (1963, 1969) pointed out, any minor discrepancy between the natural system and the model used to predict it, or between the actual and the analyzed state of the system at the initial time of a forecast, will lead to a loss of predictability within a finite time period. The size of these discrepancies largely determines the time period over which useful forecasts can be made. Improving the numerical models on one hand, and the analysis of the atmosphere used as initial conditions on the other, are hence the two basic avenues through which progress is made in Numerical Weather Prediction (NWP).

As for the atmospheric analyses, their quality depends on two main factors. First, the quality of the analysis is limited by the technique through which it is derived: both the NWP model used to generate first guess fields and the statistical estimation method that combines information from the first guess and the observations are built on approximations, owing to limits in our knowledge and computational capabilities. The second dominant factor is data coverage. Observations providing better geographical coverage, higher quality, and/or more comprehensive data should lead to improved atmospheric initial conditions and, in turn, improved NWP forecast guidance.

The global rawinsonde network has provided atmospheric observations at regular intervals and at fixed locations (regular observations). This system served well the interests of climatologists, synoptic forecasters, and early NWP forecast systems in which data were assimilated every 12 or 6 hours. More recently, various in situ and remote-sensing platforms have either been developed for the purpose of taking atmospheric measurements or been equipped with sensors to do so. Many of these new platforms provide data at asynoptic times and at varying locations. With today's advanced data assimilation systems, like ECMWF's 4-DVAR (Rabier et al. 2000) or NCEP's 3-DVAR technique (Derber et al. 1998), these opportunity driven observations can be used just as well as measurements made at synoptic times and at fixed locations.

Moreover, a simple model study by Lorenz and Emanuel (1998) indicates that observations spaced irregularly in space and time may lead to higher quality analyses. This is because when measurements are made at the same location, often the same synoptic system is sampled at consecutive times. Choosing instead to observe at random locations at each analysis time provides a better overall coverage for the state of the atmosphere. These results were confirmed by Morss et al. (2000), using a more realistic analysis/forecast environment.

Choosing observational sites in a random fashion is just one possible way of departing from the regular, or opportunity driven, observing strategies. In this paper the approach of adaptive observations will be discussed. After a general description of the approach (section 2), an overview of NCEP's contributions to related research and field programs will be presented (section 3). Section 4 describes how the adaptive observation strategy is being operationally implemented at the US National Weather Service (NWS), while sections 5 and 6 offer a summary and discussion.


2.1 Definitions

The adaptive observational strategy is defined here as an approach where the location, time, and/or observed variable is actively chosen in order to optimize the quality of NWP guidance. Optimization is interpreted here in the broadest sense, including measures like the global performance of short range forecasts in general. Targeted observations represent a subcategory under adaptive observations where data collection is optimized to improve a particular forecast aspect, like a 3-day precipitation forecast over the northeast US on a given day.

The regular and opportunity driven observations constitute the backbone of the global observing network and they are expected to do so in the foreseeable future. Adaptive, and more specifically, targeted observations can serve as an enhancement to the basic network. The remainder of this study will focus on targeted observations in this, supplementary role.

2.2 Historical perspective

Adaptive observations have a relatively long history within NOAA. The Hurricane Reconnaissance (HR) program, in which NOAA and US Air Force (USAF) planes were initially tasked to collect critical information on the location and intensity of hurricanes, started in 1947. In 1982, NOAA's Hurricane Research Division (then the National Hurricane Research Laboratory) began research flights in the data-sparse regions around tropical cyclones in order to improve numerical model forecasts of their tracks. Papers dating back to 1920 (Gregg 1920; Bowie 1922) suggest that observations to the northwest of the tropical cyclone center are most important for subsequent forecasts (Franklin et al. 1996). This was confirmed during subjectively planned synoptic flow missions. Burpee et al. (1996) found that such flights allowed for an improvement in hurricane track forecasts of approximately 25%, and as a result NOAA procured the Gulfstream-IV (G-IV) plane for operational synoptic surveillance flights for hurricanes threatening landfall in the United States and its territories east of the dateline.

Hurricane related adaptive observational work has been limited to tropical and subtropical areas and had been, until recently, based on subjective techniques. Objective targeted observational techniques were first developed and considered for extratropical use within the FASTEX field program (Joly et al. 1997). Following a workshop (Snyder 1996), various groups developed and applied targeted observational strategies that were later used in FASTEX and follow-up field programs (Buizza and Montani 1999; Gelaro et al. 1999; Bergot et al. 1999; Szunyogh et al. 1999a). Before further discussing the field programs and the results related to the strategy followed by NCEP, the key concepts of targeted observations are discussed.

2.3 Key concepts

Case selection. Since resources are limited, forecast cases for which extra targeted observations are to be collected need to be selected carefully. Forecasts of events with potentially large societal impact are prime candidates, given there is substantial uncertainty in these forecasts (i.e., threatening events are predicted with probabilities considerably lower than 1). Probabilistic forecasts based on an ensemble can serve as guidance in case selection. The event is identified by its time (verification time), location (e.g., latitude and longitude of the center of the verification region), and critical variable(s) (verification variable, like accumulated precipitation and/or low level winds).
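As a toy illustration of ensemble-based case selection (the threshold, the ensemble values, and the "medium probability" window are all invented for this sketch, not taken from the operational system), the event probability can be estimated as the fraction of ensemble members exceeding a threshold:

```python
import numpy as np

# Hypothetical sketch: estimate the probability of a threatening event from
# an ensemble and flag the case as a targeting candidate when the forecast
# is genuinely uncertain (probability not close to 0 or 1).
rng = np.random.default_rng(42)

# 20-member ensemble of 24-h accumulated precipitation forecasts (mm)
ensemble_precip = rng.gamma(shape=2.0, scale=8.0, size=20)

threshold_mm = 25.0                        # "threatening event" threshold
p_event = float(np.mean(ensemble_precip > threshold_mm))

# Substantial uncertainty: probability well below 1 but not near 0
is_candidate = 0.2 <= p_event <= 0.8
```
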

Targeting technique. The identification of a feature for which forecasts are to be improved is only the first step in the process of targeted observations. Next, the location, time, and type of observations to be taken need to be identified. In practice, given information on the case selected, a future targeted observation time is typically chosen based on practical considerations (deployment limitations, etc.). The type of observation is also determined by technical factors. It follows that the primary concern of targeting is the identification of a region, and within it a pattern that can optimize the effect of targeted observations with respect to the selected forecast feature.

Several techniques have been developed with the aim of providing this information: singular vectors (e.g., Gelaro et al. 1999), adjoint sensitivity calculations (e.g., Langland and Rohaly 1996), the linear inverse technique (Pu et al. 1997), and the ensemble transform (ET) technique (Bishop and Toth 1996, 1999), later further developed as the ensemble transform Kalman filter (ETKF, Bishop et al. 2000). The first three methods are based on linear tangent approximations, while the ET and ETKF techniques, two closely related methods, use a linear approach within the space spanned by nonlinear ensemble forecasts. Each method uses different approximations and has different limitations. Based on comparative results and practical considerations, NCEP chose to pursue the use of the ensemble based technique.

Both the ET and ETKF techniques, developed in a collaborative effort between Pennsylvania State University (PSU) and NCEP scientists, are based on the assumption that linear combinations of nonlinear ensemble perturbations can well describe the nonlinear evolution of such perturbations. The ensemble forecasts are linearly combined in such a way that the variance in the transformed ensemble approximates analysis error variance under various hypothetical observing network configurations at the future targeted observation time. Different linear combinations are determined corresponding to the various alternative placements of targeted observations. The variance in these transformed ensembles is meant to differ (i.e., be lower) at the targeted observation time only over the area where adaptive observations are assumed to be taken. The variance in the transformed ensembles, using the same linear weights identified for the targeted observation time, is then evaluated at the verification time, within the verification region, using the verifying variables. Out of all tested possible deployments, the one that reduces the variance in the transformed ensemble within the verification region the most is selected as the optimal choice for the specific targeted application (Fig. 1).
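The variance-reduction logic can be illustrated with a toy calculation. The sketch below is not the operational ET/ETKF code: it scores each candidate site by assimilating a single hypothetical scalar observation there via the standard Kalman-filter variance update, carried entirely in ensemble space. All sizes, the observation error variance, and the planted sensitive site (index 30) are illustrative assumptions.

```python
import numpy as np

# Minimal ensemble-space targeting sketch: how much would one hypothetical
# observation at each candidate site, taken at targeting time, reduce the
# ensemble forecast variance at the verification time?
rng = np.random.default_rng(0)

k = 25                                     # ensemble size
n = 100                                    # candidate observation sites
Xo = rng.standard_normal((n, k))           # perturbations at targeting time
Xo -= Xo.mean(axis=1, keepdims=True)

# Verification-time perturbations, made (artificially) sensitive to site 30
v = 0.9 * Xo[30] + 0.2 * rng.standard_normal(k)
v -= v.mean()

obs_err_var = 0.5                          # assumed observation error variance
var_v = v @ v / (k - 1)                    # prior variance at verification time

reductions = np.empty(n)
for j in range(n):
    y = Xo[j]
    cov_vy = v @ y / (k - 1)
    # Scalar Kalman-filter update: variance removed at verification time
    # by a single observation taken at candidate site j
    reductions[j] = cov_vy**2 / (y @ y / (k - 1) + obs_err_var)

best_site = int(np.argmax(reductions))     # most sensitive candidate site
```

In the real ET/ETKF calculation, whole flight-track patterns of observations are evaluated at once rather than single points, but the principle is the same: covariances carried by the ensemble link hypothetical observations at targeting time to forecast variance at verification time.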

One of the advantages of the ET technique is that it is computationally inexpensive since all calculations are performed in the subspace of a small (typically 10-70 member) ensemble of forecasts. The use of a small ensemble constitutes the method's most severe limitation, too: the degrees of freedom in which the adaptive observational problem is solved is effectively limited by the size of the ensemble. Nevertheless, as will be shown below, the ET technique can provide useful basic information regarding the expected location and size of the impact of localized observations in an analysis/forecast system.

Data collection. Of most use to targeted observations are platforms that can provide observations at controllable locations. Examples of such platforms are unmanned aerial vehicles (UAVs), energy intensive satellite observations (like the proposed LIDAR wind measurements), and dropsondes released from manned aircraft. So far it is this last platform that has been used in targeted observational work.

Assimilation of data. The final step in targeted observations is the assimilation of the targeted data, along with those available from the regular and opportunity driven part of the observing network. The impact of the data is usually evaluated by running a control analysis/forecast cycle in parallel that differs from the operational cycle only in that it excludes all targeted data. The difference between the operational (including targeted data) and parallel (control) analysis and forecast fields thus reveals the effect of the targeted data. Extracting useful information from geographically localized data is a demanding task for current analysis systems.


3.1 Field programs

During the past four winters NCEP participated in five field programs concerning targeted observations, with the aim of improving short-range weather forecasts by adaptively taking dropsonde observations in sensitive areas.

FASTEX. The concept of targeted observations was first tested in the field during the Fronts and Atlantic Storm-Track Experiment (see Joly et al. 1999, and references therein). FASTEX was a multipurpose international field experiment conducted in January-February 1997 with the principal aim of studying the life cycle of cyclonic waves forming on a jet. The field program offered an opportunity to test and compare four different targeting methods, one of them being the ET technique used at NCEP (Szunyogh et al. 1999a; Toth et al. 1998), as applied by five different research groups.

NORPEX. The North-Pacific Experiment was a program organized by the US Navy (NRL) and NCEP with the aim of further testing the targeted observation concept in a more controlled environment. Primarily it was a research program designed for further testing and comparing two of the targeting techniques used in FASTEX: the SV and the ET techniques (Langland et al. 1999; Szunyogh et al. 1999b). Its secondary goal was to provide valuable data for operational applications during January-February 1998, when exceptionally heavy storms hit the west coast of the US due to an intense El Nino event.

CALJET. The California Land-falling Jets Experiment was organized to test the utility of real-time experimental data, collected along the west coast and in nearby offshore areas, for short range weather warnings (Ralph et al. 1998). The program gained special significance due to the heavy storms caused by the extreme El Nino event during January-March 1998. The program was augmented by five targeted observation flights in March 1998. These flights, designed by the NCEP-PSU targeting team in collaboration with CALJET scientists (Toth et al. 2000), represent the first application of targeted observations to mesoscale events, with a forecast window (the time elapsed between the targeted observation time and the verification time) of 12-24 hours (compared to the typical 36-96 hour windows for synoptic scale targeting).

WSR99. The Winter Storm Reconnaissance program in 1999 was the first in a series of quasi-operational programs designed to provide targeted data for operational NWP forecast applications. WSR is an NWS program in which the events to be targeted are identified in real time by operational NWS weather forecasters over the continental US and Alaska, in the 24-96 hour lead time range. For targeting, the ET and ETKF techniques are used, and the flights are carried out by NOAA's Aircraft Operations Center (AOC) and the US Air Force Reserve. The primary goal of the WSR99 program (Szunyogh et al. 2000; Toth et al. 1999) was to provide data for improving real time operational forecasts for significant wintertime weather events. Testing and further development of operationally applicable targeting methods was a secondary goal of the program.

WSR00. WSR00 represented another step to bring targeted observations into an operational framework. The targeting tools developed in earlier years were tested before final operational implementation (Szunyogh et al. 2001). All decisions and results were made available in near real time through a dedicated web site at:

The ET and ETKF targeting calculations in these field programs were made and evaluated by a team of collaborating scientists from NCEP, PSU and MIT (Toth et al. 2000). Details about the application of the ET and ETKF methods are given in Table 1. Note that in WSR00 the ETKF technique was applied on the combined NCEP (14 perturbed members, Toth and Kalnay 1997) and ECMWF (50 members, Buizza et al. 1998) ensembles for the first time.










Program   Targeting variables                Verification variables                Verification region
FASTEX    850, 500, 250 hPa streamfunction   850, 500, 250 hPa streamfunction      1000 km radius
NORPEX    850, 500, 250 hPa winds            850, 500, 250 hPa winds               1000 km radius
CALJET    850, 500 hPa winds                 850 hPa winds + 12-hr accum. precip   500 km radius
WSR99     850, 500, 250 hPa winds            850, 500, 250 hPa winds               1000 km radius
WSR00     850, 500, 250 hPa winds            850, 500, 250 hPa winds               1000 km radius

Table 1. Technical details of the application of the ET and ETKF techniques in field programs.

3.2 Results

In this section targeting results are summarized for all cases in which the ET and ETKF techniques were used, based on earlier publications (with the addition of a FASTEX targeting case not studied by Szunyogh et al. 1999a). Excluded are three FASTEX cases that have not yet been evaluated and two additional FASTEX cases that have not been evaluated in terms of forecast improvement, one of them due to lack of data. Altogether 54 targeted observation aircraft missions and 12 related test experiments (in which traditional data from moderately sensitive areas are used) are evaluated in terms of targeting results and forecast improvements. Of the 54 missions, four were "null cases" during FASTEX and CALJET, in which observations were collected in areas where other methods indicated sensitivity but the ET technique showed practically none. The WSR missions often targeted more than one forecast feature at various lead times; the remaining 50 missions were hence carried out to improve NWP forecasts for 71 forecast cases (of which 69 are verified). When evaluating the results we should keep in mind that targeting is a statistical problem, and the techniques used (sensitivity calculations, the atmospheric analysis procedure) are also statistical in nature. For example, the exact errors in the future analyses and forecasts considered are not known and can only be statistically estimated. Targeting results should therefore follow expectations only in a statistical sense, over a number of cases, and not necessarily for every individual case.

Targeting. A necessary but not sufficient condition for successful targeting is that the signal (the difference between analyses/forecasts generated by the operational and parallel analysis/forecast cycles, showing the effect of the targeted data) reaches the verification region at the verification time. Out of the 71/63 cases evaluated in this fashion for surface pressure/12-hr accumulated precipitation, the absolute (a local) maximum of the signal was within the predefined verification region in 47/42 (16/14) cases (Table 2). Note that in the majority of the remaining 8/7 cases the signal also reached the verification region but exhibited no local maximum there.
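As a toy illustration of this check (the grid, the signal field, and the region center are all invented), one can locate the absolute maximum of the operational-minus-control difference field and test whether it falls within a fixed-radius verification region:

```python
import numpy as np

# Hypothetical signal field on a 1-degree lat/lon grid; a clear absolute
# maximum is planted for the sake of the example.
rng = np.random.default_rng(1)

lats = np.linspace(25.0, 65.0, 41)
lons = np.linspace(200.0, 300.0, 101)
signal = rng.standard_normal((lats.size, lons.size))
signal[20, 50] = 10.0                      # planted absolute maximum

i, j = np.unravel_index(np.argmax(np.abs(signal)), signal.shape)
max_lat, max_lon = lats[i], lons[j]

# Verification region: 1000 km radius around a chosen center (cf. Table 1);
# a flat-earth distance approximation suffices for this sketch.
center_lat, center_lon = 45.0, 250.0
km_per_deg = 111.0
dist_km = km_per_deg * np.hypot(
    max_lat - center_lat,
    (max_lon - center_lon) * np.cos(np.radians(center_lat)))
hit = dist_km <= 1000.0                    # signal maximum inside the region?
```
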




[Table 2 could not be fully reconstructed from the source. The legible entries give, per program and in total, the number of cases in which the absolute (absolute or local) maximum of the signal fell within the verification region, together with counts of improved versus degraded forecasts; the totals are 47 (63) of the 71 surface pressure cases and 42 (56) of the 63 precipitation cases, i.e., 66% (89%) and 67% (89%) of cases, respectively.]





Table 2. Targeting and forecast verification results for the FASTEX, NORPEX, CALJET, WSR99 and WSR00 field programs.

Note the relatively large number of WSR00 cases in which the maximum signal was outside of the verification region. These results are due to a flow regime in which signal propagation was different from that of earlier winters, in the sense that the maximum of the signal, averaged over all 12 WSR00 cases, remained over the northeast Pacific ocean. This feature of the signal was well predicted by the ETKF technique and thus was not unexpected.

Recall that targeting calculations are expected to identify areas from where extra observations have the largest impact on a particular forecast feature in a statistical sense and not the area from where the absolute maximum of the signal would necessarily reach the verification region. If the two areas do not coincide, as was apparently the case during WSR00, the signal (measured within the verification region at verification time) from the latter area is expected to be smaller than that from the former, most sensitive area.

Though having a large or maximum signal within the verification area is a necessary condition, it is not sufficient for good targeting. Another requirement is that initial signals from areas other than those identified by the technique as most sensitive have less impact within the verification region. This has been demonstrated by Szunyogh et al. (1999a, their Figs. 6 and 7) through 12 data denial experiments for seven FASTEX cases, in which the impact of regular radiosonde data from moderately sensitive areas was shown to be significantly smaller than that of the targeted dropsonde data from the most sensitive regions.

Moreover, for three additional FASTEX cases and one CALJET case, it was also shown that when data are assimilated from areas found non-sensitive by the ET technique, the forecast signal within the verification region is much smaller than that from moderately or highly sensitive areas (Toth et al. 2000). These results indicate that the ET technique is a viable tool for predicting the impact of extra observations and hence can be used for targeting observational areas for maximum forecast impact.

Forecast improvement. Whether the signal reaches the verification region is obviously a critical question for targeting. An even more important question is how often, and by how much, the forecast error is reduced. If extra data are used in a well formulated analysis, forecast errors should be reduced in a statistical sense. Failures (i.e., forecast degradations) in individual cases are nevertheless expected to occur, though less often with improved analysis schemes that can extract more useful information from the extra data.

Targeted verification. Targeted and control forecasts were first verified against observational data within the predefined verification regions, at verification time. For surface pressure and wind forecasts radiosonde and surface pressure observations were used by an objective algorithm, whereas 12-hour accumulated precipitation forecasts were verified subjectively using raingage measurements. Note that only the NORPEX and WSR99 cases were evaluated in terms of precipitation forecast improvements and that only wind forecasts were evaluated for FASTEX. The verification results in Table 2 were compiled based on results reported in earlier studies (Szunyogh et al. 1999a; Szunyogh et al. 1999b; Toth et al. 2000; Szunyogh et al. 2000; Szunyogh et al. 2001).

Table 2 lists the overall verification results, indicating how often the targeted or the control forecasts performed better. The summary score is positive for the targeted (or control) forecast when the majority of the three individual measures indicates superior performance. As for the overall results for the three individual measures, the targeted forecasts reveal an error reduction in 61-70% of all cases, with only 6-29% of cases showing a degradation in skill. The summary measure indicates that in more than two thirds of the cases (68%) the assimilation of targeted data improved the overall quality of the forecasts, while forecast quality was negatively impacted in only 14% of the cases. These results demonstrate, at a very high level of statistical significance (P=0.0002 for surface pressure, and 10⁻⁹ or lower for the other variables), the positive value of targeted observations in improving the quality of targeted forecasts.
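Significance levels of this kind can be approximated with a one-sided binomial sign test on the counts of improved versus degraded forecasts (ties excluded). A minimal sketch, with illustrative counts rather than the paper's exact per-variable tallies:

```python
from math import comb

# One-sided sign test: probability of observing at least n_improved
# successes out of n_improved + n_degraded decided cases, under the null
# hypothesis that improvement and degradation are equally likely (p = 0.5).
def sign_test_p(n_improved, n_degraded):
    n = n_improved + n_degraded
    return sum(comb(n, k) for k in range(n_improved, n + 1)) / 2**n

# e.g., 47 improved vs 10 degraded forecasts (illustrative numbers)
p_value = sign_test_p(47, 10)
```
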

It is interesting to note in Table 2 that the impact of the targeted data on the quality of forecasts was more positive during field programs after FASTEX. This was attributed by Szunyogh et al. (2000) to significant improvements in the operational analysis scheme at NCEP (Derber et al. 1998) which apparently enabled the scheme to better handle isolated patches of data collected in subsequent targeting programs.

To illustrate how the three main aspects of the adaptive observational approach followed at NCEP performed in the WSR00 program, the surface pressure error in the control forecast and the percentage RMS error reduction due to the use of targeted data, averaged over the 12 WSR00 missions, are shown in Fig. 2. The three panels of Fig. 2 also indicate the averaged locations of the individual verification cases, at approximate averaged verification times of 36 hrs for the 3 Alaskan and 7 west coast cases, and of 48 and 72 hrs for the 5 and 8 eastern US cases, respectively. Unlike the forecast error results based on observed data in Table 2, those in Fig. 2 (and later in Fig. 3) are defined with respect to NWP analyzed fields from the same control and targeting analysis cycles from which the forecasts are evaluated.

First we note in Fig. 2 that all four averaged verification areas are associated with large, locally maximum forecast errors. This indicates that based on ensemble forecast guidance it is possible to identify forecast features associated with large forecast uncertainty in real time.

Second, we note that for the first three verification areas, with 48 hrs or shorter lead times, there is a maximum in the forecast improvement field within the verification region at its averaged location, while the verification region at 72 hrs lead time lies in the middle of a larger area characterized by substantial impact. This in turn indicates that the targeting component of the adaptive approach also works reasonably well.

Finally, the shading in the forecast improvement chart indicates that in all four cases the impact of the data on the quality of the forecasts within the verification regions is positive. Similar results were reported by Szunyogh et al. (2000) and Toth et al. (2000) for the WSR99 program. Overall, the results in Fig. 2 indicate that the use of the adaptively taken data had a substantial positive impact on forecast quality within the predefined verification regions, in the areas of maximum forecast errors.

As the forecast signal created by the assimilation of targeted data over the northeast Pacific travels to the east, so does the maximum difference between the errors computed for the control and targeted forecasts (Fig. 3). The maximum error reduction (23%) first reaches the Alaska region at around 36 hrs lead time, then the western half of the US (13%) and the eastern half of the US (8%) at around 48 and 72 hrs lead times, respectively.


Based on the positive experience accumulated with the use of targeted observations during the past few years, the WSR program is currently being operationally implemented at the NWS. Initially, resources will permit the introduction of a 30-day observational period in mid-winter, when destructive winter storms are most likely to develop. It is anticipated that in the future the program will be expanded to cover the whole winter season. Some details of the operational implementation are described below.

4.1 Case selection

Case selection will be coordinated during WSR programs on a daily basis by the Hydrometeorological Prediction Center (HPC) of NCEP, based on ensemble forecast guidance. HPC will compile a list of possible requests received from NWS Weather Forecast Offices (WFOs) across the continental US and Alaska, and subsequently prepare a prioritized national list of targeted forecast cases. Each case will be marked by the central location and verification time of the targeted weather event, along with a priority on a scale of low, moderate, and high. This procedure has been successfully used during the WSR99 and WSR00 programs.

4.2 Targeting

The prioritized list of cases will be sent to the Senior Duty Meteorologist's (SDM) desk at the NCEP Central Operations (NCO). The SDM desk is responsible, among other things, for monitoring and ensuring the quality of NCEP analysis and forecast products. SDM personnel will initiate targeting calculations that produce objective guidance for the selection of areas where the addition of extra observations is expected to improve the targeted forecasts most. These calculations will be carried out separately for each selected case. The primary output of these calculations is a chart indicating, at each gridpoint, the forecast error reduction within the verification region expected from adaptive observations taken in the vicinity of that gridpoint.

The same calculations are also carried out for a number of predesigned flight tracks, each associated with a different pattern of hypothetical dropsonde locations. The best track for a particular targeting case is easily identified from a bar chart as the flight with the largest expected data impact. During previous field programs these products have been routinely produced and used by a group of collaborating scientists at the Environmental Modeling Center of NCEP, PSU, and MIT.
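The selection step itself amounts to ranking a handful of predesigned tracks by their expected impact, as in the bar chart described above. A minimal sketch, with invented track names and expected-impact scores (the real scores would come from the ETKF calculations):

```python
# Hypothetical expected reductions of forecast error variance in the
# verification region for each predesigned flight track (illustrative
# names and values only).
expected_impact = {
    "track_A_anchorage_sw": 0.18,
    "track_B_anchorage_s": 0.31,
    "track_C_honolulu_ne": 0.24,
}

# The best track is simply the one with the largest expected data impact.
best_track = max(expected_impact, key=expected_impact.get)
```
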

4.3 Decision making

Based on the targeting results, SDM personnel, in consultation with HPC, will decide whether a flight is requested on a particular day and, if so, which flight track(s) are selected. The decision will be based on (1) the priority of the requested cases; and (2) the expected forecast impact/improvement. In addition to the products described above, the decision process is supported by a series of charts that depict the temporal/spatial evolution of the expected forecast impact of the data. Information on all requested cases needs to be combined and considered before a final decision can be made. Beyond a flight decision for the next day, an outlook for the following day will also be prepared for advance planning purposes. During past field programs the decisions were made by the NCEP-PSU-MIT targeting team.

4.4 Data collection

As during past field programs, flight requests will be forwarded to CARCAH (Chief, Aerial Reconnaissance Coordination, All Hurricanes). CARCAH is then responsible for dispatching the requests to the Aircraft Operations Center (AOC) of NOAA and the 3rd Weather Reconnaissance Squadron of the US Air Force, who will carry out the requested dropsonde missions using G-IV and C-130 aircraft, respectively. The planes are based in Anchorage, Alaska, and Honolulu, Hawaii. The use of predesigned flight tracks (Fig. 4) greatly simplifies the decision making, communication, and flight planning process.


The lack of adequate data coverage in the preparation of initial fields is one of the most serious limitations of NWP forecasts. Unfortunately, the enhancement of atmospheric data coverage on a global scale is limited by financial and technical factors. In past decades adaptive observations, in which observations are taken over limited areas to improve forecasts of critical features, were proposed and implemented in hurricane forecasting.

In the past few years objective techniques have been developed that can identify areas from where extra observations can potentially improve preselected forecast features. One of these tools, the Ensemble Transform Kalman Filter (ETKF) technique has been tested at NCEP in five adaptive observational field programs in the extratropics by a team of EMC, PSU, and MIT scientists. Results from these field programs indicate that:

1) Verification regions identified in real time with the aid of ensemble forecast guidance as areas of large forecast uncertainty are found to be associated with locally large (maximum) forecast errors.

2) The absolute (a local) maximum of the forecast signal due to the assimilation of targeted data originating from the area found most sensitive by the ETKF technique reaches the preselected verification region in 66% (89%) of the cases. The forecast signal from moderately sensitive (non-sensitive) areas is found to be 1.7 (4) times smaller than that from the most sensitive area.

3) In 68% of all cases the error in the targeted forecasts is reduced by the use of targeted data while degradation in forecast quality occurs only in 14% of all cases.

4) The average local error reduction in the verification regions due to the use of targeted data is above 20% at shorter lead times, dropping to around or below 10% at 72 hrs and longer lead times.

Based on these results the Winter Storm Reconnaissance program, a targeted observational program designed to improve forecast guidance for critical weather events over the continental US and Alaska, is currently being operationally implemented. This implementation takes place only six years after the first public discussion on the use of adaptive observations in the extratropics (Snyder 1996). Such a fast transition from research to operations would not have been possible without a far reaching collaboration among scientists and other participants from different agencies and institutes.


We note that over the past 25 years, Northern Hemisphere extratropical 500 hPa height 2-day forecast rms errors have shown a 10% reduction due to increases in data quality and volume (Kistler et al. 2001). Note that these years have seen the advent of satellite observations. The results in the present paper demonstrate that a similar, 10-25% error reduction can be achieved over areas/cases selected based on the largest potential weather-related threat to society, with the use of 20-25 dropsondes released adaptively every second or third day over the data-sparse northeast Pacific Ocean.

These results raise the possibility that the judicious use of the adaptive approach, in addition to the regular and opportunity driven parts of the observational network, may bring substantial further benefits. The use of new observing platforms in the future, such as unmanned aerial vehicles (UAVs), driftsondes, or space-based lidar wind measurements, may make the program more successful and/or more economical.

Some of the new proposed observing systems naturally offer themselves as candidate platforms for a global implementation of the adaptive approach. The potential benefits and possible pitfalls of such an application should be carefully investigated regarding its impact not only on weather forecasts but also on climate applications. The Hemispheric Observing System Research and Predictability Experiment (THORPEX), a US Weather Research Program initiative, will undoubtedly contribute to the exploration of these new possibilities.

The adaptive observational approach adopted operationally at the US NWS, though it satisfies basic requirements, is a relatively simple approach that allows for many improvements. First, the procedure of case selection can be supported by the development of specific objective guidance products based on an ensemble of forecasts. These products would identify areas where threatening weather events (say, more than an inch of precipitation) are associated with forecast probabilities in the medium range (not close to 0 or 1).
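Such a case-selection product could be computed from the ensemble along the following lines. This is a minimal sketch: the one-inch event threshold comes from the text, while the 0.2-0.8 "medium probability" band, the grid-point names, and all forecast values are illustrative assumptions.

```python
# Sketch of an objective case-selection product: from an ensemble of
# precipitation forecasts, flag grid points where a threatening event
# (here, more than an inch of precipitation) has a forecast probability
# in the medium range, i.e., where targeted data could plausibly tip
# the forecast either way.

THRESHOLD_IN = 1.0     # event definition: more than an inch of precipitation
MEDIUM = (0.2, 0.8)    # hypothetical "medium probability" band (assumption)

def event_probability(members):
    """Fraction of ensemble members forecasting the event."""
    return sum(m > THRESHOLD_IN for m in members) / len(members)

# Illustrative 10-member precipitation forecasts (inches) at three points.
grid = {"pt1": [1.4, 0.2, 1.1, 0.9, 1.6, 0.3, 1.2, 0.8, 1.5, 0.1],  # uncertain
        "pt2": [1.3, 1.5, 1.2, 1.8, 1.4, 1.6, 1.1, 1.9, 1.3, 1.7],  # near-certain event
        "pt3": [0.0, 0.1, 0.0, 0.2, 0.1, 0.0, 0.3, 0.1, 0.0, 0.2]}  # near-certain non-event

flagged = [p for p, members in grid.items()
           if MEDIUM[0] < event_probability(members) < MEDIUM[1]]
print(flagged)  # prints ['pt1']: only the genuinely uncertain point is flagged
```

Points where nearly all members agree (probability close to 0 or 1) are excluded, since extra observations there would be unlikely to change the forecast outcome.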

Another, related improvement could be the introduction of case-specific norms to measure forecast uncertainty at verification time. Different observations may be required depending on whether one is interested in improving aviation forecasts at high altitudes or in reducing the uncertainty in precipitation forecasts. Further improvements could be achieved through refinements to the ETKF technique. In particular, the assumptions about the analysis impact of data within the ETKF technique are not correct when the method is used in conjunction with an analysis scheme other than an ensemble Kalman filter.

Another shortcoming of the ETKF technique is that, when applied with an ensemble that does not properly account for uncertainty associated with subgrid-scale processes, it can seriously overestimate the potential of adaptive observations. In such cases the ETKF may suggest a deterministic relationship between targeted observations at analysis time and the forecast impact at a (much) later verification time. In reality, though a connection may exist, it may be far from deterministic, due to the effect of stochastic subgrid-scale processes that are present in nature but not necessarily accounted for in the ensemble used by the ETKF method.


The Winter Storm Reconnaissance Program and the other field experiments would not have been possible without the work of a large number of participants and collaborators. Specifically, we would like to acknowledge the dedicated work of the NOAA G-IV (led by Sean White) and the USAF Reserve C-130 flight crews (coordinated by Jon Talbot). Coordination with the flight facilities was provided by CARCAH, led by John Pavone. Our work at various stages benefited from collaboration with Kerry Emanuel (MIT), Robert Gall (USWRP), Eugenia Kalnay (Univ. Maryland), Zhao-Xia Pu (USRA), Brian Etherton and Jon Moskaitis (PSU), David Weinbrenner and Buzz Burek (SDM/NCO), David Reynolds (NCEP/HPC), Louis Uccellini (director of NCEP, formerly at NWS Headquarters), Paul Hirschberg (NWS Headquarters), Naomi Surgi (NCEP/EMC), Marty Ralph (NOAA/ERL), Chris Snyder (NCAR), Mel Shapiro (NCAR), Cecile Girz (NOAA/ERL/FSL), Sim Aberson (NOAA/HRD), and Ron Gelaro and Rolf Langland (NRL). David Burridge, director of the European Centre for Medium-Range Weather Forecasts, is credited for granting access to the ECMWF ensemble forecast data used in the real-time sensitivity calculations. Mark Iredell (NCEP/EMC), and Jack Woollen and Timothy Marchok (GSC) provided valuable help with computational issues, data manipulation, and graphics.


Bergot, T., G. Hello, and A. Joly, 1999: Adaptive observations: a feasibility study. Mon. Wea. Rev., 127, 743-765.

Bishop, C. H., B. J. Etherton, and S. J. Majumdar, 2000: Adaptive sampling with the Ensemble Transform Kalman Filter. Part I: Theoretical aspects. Mon. Wea. Rev., in press.

Bishop, C., and Z. Toth, 1996: Using ensembles to identify observations likely to improve forecasts. Preprints of the 11th AMS Conference on Numerical Weather Prediction, 19-23 August 1996, Norfolk, Virginia, p. 72-74.

Bishop, C. H. and Z. Toth, 1999: Ensemble transformation and adaptive observations. J. Atmos. Sci. 56, 1748-1765.

Bowie, Edward H., 1922: Formation and movement of West Indian hurricanes. Mon. Wea. Rev., 50, 173-190.

Buizza, R. and Montani, A., 1999: Targeting observations using singular vectors. J. Atmos. Sci., 56, 2965-2985.

Buizza, R., Petroliagis, T., Palmer, T. N., Barkmeijer, J., Hamrud, M., Hollingsworth, A., Simmons, A., and Wedi, N., 1998: Impact of model resolution and ensemble size on the performance of an ensemble prediction system. Q. J. R. Meteorol. Soc., 124, 1935-1960.

Burpee, R. W., J. L. Franklin, S. J. Lord, R. E. Tuleya, and S. D. Aberson, 1996: The impact of Omega dropwindsondes on operational hurricane track forecast models. Bulletin of the American Meteorological Society, 77, 925-933.

Derber, J., H.-L. Pan, J. Alpert, C. Caplan, G. White, M. Iredell, Y.-T. Hou, K. Campana and S. Moorthi, 1998: Changes to the 1998 NCEP Operational MRF model Analysis-Forecast system. NOAA/NWS Tech. Procedure Bull. 449, 16 pp. [Available from Office of Meteorology, National Weather Service, 1325 East-West Highway, Silver Spring, MD 20910.]

Gelaro, R., R. H. Langland, G. D. Rohaly, and T. E. Rosmond, 1999: An assessment of the singular-vector approach to target observations using the FASTEX dataset. Quart. J. Roy. Meteor. Soc., 125, 3299-3328.

Gregg, W. R., 1920: Aerological observations in the West Indies. Mon. Wea. Rev., 48, 264.

Franklin, J. L., S. E. Feuer, J. Kaplan, and S. D. Aberson, 1996: Tropical cyclone motion and surrounding flow relationships: Searching for beta gyres in Omega dropwindsonde datasets. Mon. Wea. Rev., 124, 64-84.

Joly, A., K. A. Browning, P. Bessemoulin, J.-P. Cammas, G. Caniaux, J.-P. Chalon, S. A. Clough, R. Dirks, K. A. Emanuel, L. Eymard, F. Lalaurette, R. Gall, T. D. Hewson, P. H. Hildebrand, D. Jorgensen, R. H. Langland, Y. Lemaitre, P. Mascart, J. A. Moore, P. O. G. Persson, F. Roux, M. A. Shapiro, C. Snyder, Z. Toth, and R. M. Wakimoto, 1999: Overview of the field phase of the Fronts and Atlantic Storm-Track Experiment (FASTEX) project. Quart. J. Roy. Meteor. Soc., 125, in press.

Joly, A., D. Jorgensen, M.A. Shapiro, A. Thorpe, P. Bessemoulin, K.A. Browning, J.-P. Cammas, J.-P. Chalon, S.A. Clough, K.A. Emanuel, L. Eymard, R. Gall, P.H. Hildebrand, R.H. Langland, Y. Lemaitre, P. Lynch, J.A. Moore, P.O.G. Persson, C. Snyder, and R.M. Wakimoto, 1997: The Fronts and Atlantic Storm-Track Experiment (FASTEX): Scientific Objectives and Experimental Design. Bull. Amer. Meteor. Soc., 78, 1917-1940.

Kistler, R., E. Kalnay, W. Collins, S. Saha, G. White, J. Woollen, M. Chelliah, W. Ebisuzaki, M. Kanamitsu, V. Kousky, H. van den Dool, R. Jenne, and M. Fiorino, 2001: The NCEP/NCAR 50-year Reanalysis: Monthly-means CD-ROM and Documentation. Bull. Amer. Meteor. Soc., in press.

Langland, R. H., and G. D. Rohaly, 1996: Analysis error and adjoint sensitivity in prediction of a North Atlantic frontal cyclone. Preprints, 11th Conf. on Numerical Weather Prediction, 19-23 August 1996, Norfolk, Va., Amer. Meteor. Soc., pp. 150-152.

Langland, R. H., Z. Toth, R. Gelaro, I. Szunyogh, M. A. Shapiro, S. Majumdar, R. Morss, G. D. Rohaly, C. Velden, N. Bond, and C. Bishop, 1999: The North Pacific Experiment (NORPEX-98): Targeted observations for improved North American weather forecasts. Bull. Amer. Meteorol. Soc., 80, 1363-1384.

Lorenz, E. N., 1963: Deterministic non-periodic flow. J. Atmos. Sci., 20, 130-141.

Lorenz, E. N., 1969: The predictability of a flow which possesses many scales of motion. Tellus, 21, 289-307.

Lorenz, E. N., and K. A. Emanuel, 1998: Optimal sites for supplementary weather observations: simulations with a small model. J. Atmos. Sci., 55, 633-653.

Morss, R. E., K. A. Emanuel, and C. Snyder, 2000: Idealized adaptive observation strategies for improving numerical weather prediction. J. Atmos. Sci., in press.

Pu, Z.-X., E. Kalnay, J. Sela, and I. Szunyogh, 1997: Sensitivity of forecast error to initial conditions with a quasi-inverse linear method. Mon. Wea. Rev., 125, 2479-2503.

Rabier, F., H. Jarvinen, E. Klinker, J.-F. Mahfouf and A. Simmons, 2000: The ECMWF operational implementation of four-dimensional variational assimilation. Part I: experimental results with simplified physics. Quart. J. Roy. Met. Soc., 126, 1143-1170.

Ralph, F. M., O. Persson, D. Reynolds, P. Neiman, W. Nuss, J. Schmidt, D. Jorgensen, C. King, A. White, J. Bao, W. Neff, D. Kingsmill, D. Miller, Z. Toth, and J. Wilczak, 1998: The use of tropospheric profiling in CALJET. 4th Symp. Tropospheric Profiling: Needs and Technol., 20-25 Sept., Snowmass, CO, p. 258-260.

Snyder, C., 1996: Summary of an Informal Workshop on Adaptive Observations and FASTEX. Bull. Amer. Meteorol. Soc., 77, 953-961.

Szunyogh, I., Z. Toth, K. A. Emanuel, C. H. Bishop, C. Snyder, R. E. Morss, J. Woollen, and T. Marchok, 1999a: Ensemble-based targeting experiments during FASTEX: the effect of dropsonde data from the Lear jet. Quart. J. Roy. Meteor. Soc., 125, 3189-3218.

Szunyogh, I., Z. Toth, S. Majumdar, R. Morss, C. Bishop, and S. Lord, 1999b: Ensemble-based targeted observations during NORPEX. Preprints of the 3rd Symposium on Integrated Observing Systems, 10-15 January 1999, Dallas, Texas, 74-77.

Szunyogh, I., Z. Toth, S. Majumdar, R. Morss, B. Etherton, and C. Bishop, 2000: The effect of targeted dropsonde observations during the 1999 Winter Storm Reconnaissance program. Mon. Wea. Rev., 128, 3520-3537.

Szunyogh, I., Z. Toth, and S. Majumdar, 2001: On the propagation of the effect of targeted observations: The 2000 Winter Storm Reconnaissance Program. Tellus, to be submitted.

Toth, Z., and E. Kalnay, 1997: Ensemble forecasting at NCEP and the breeding method. Mon. Wea. Rev., 125, 3297-3319.

Toth, Z., I. Szunyogh, K. Emanuel, C. Snyder, J. Woollen, W.-S. Wu, T. Marchok, R. Morss, and C. Bishop, 1998: Ensemble-based targeted observations during FASTEX. Preprints of the 12th Conference on Numerical Weather Prediction, 11-16 January 1998, Phoenix, Arizona, p. 24-27.

Toth, Z., I. Szunyogh, S. Majumdar, R. Morss, B. Etherton, C. Bishop, and S. Lord, 1999: The 1999 Winter Storm Reconnaissance Program. Preprints for the 13th AMS Conference on Numerical Weather Prediction, 13-17 September 1999, Denver, CO, 27-32.

Toth, Z., I. Szunyogh, S. Majumdar, R. Morss, B. Etherton, C. Bishop, and S. Lord, 2000: Targeted observations at NCEP: Toward an operational implementation. Preprints for the Fourth Symposium on Integrated Observing Systems, 10-14 January 2000, Long Beach, CA, AMS, 186-193.

1 GSC (Beltsville, MD) at NCEP. Corresponding author address: Z. Toth, NCEP/EMC, 5200 Auth Rd., Room 207, Camp Springs, MD 20746.

2 UCAR Visiting Scientist at EMC/NCEP; 3 Pennsylvania State University, State College, PA; 4 NCAR, Boulder, CO.