Zoltan Toth (SAIC at NCEP) and Stephane Vannitsem (Royal Meteorological Institute of Belgium)


            Scientific and computational limitations prevent us from constructing a perfect numerical model of real systems. In a chaotic system like the atmosphere, model related errors, just like errors in the initial conditions, contribute to the eventual loss of predictive skill. The spatial and temporal variations in skill are of great importance to users of weather forecasts. Ensemble forecasting, where not only one but a number of numerical integrations are carried out, was first introduced to assess initial error related variations in predictability. Model related errors, however, also contribute to variations in skill. Such variations can only be captured if all known model related uncertainty is simulated in the forecasts at their origin.

            Based on earlier methods, a comprehensive approach is proposed here to capture forecast uncertainty associated with the use of imperfect models, including that due to numerical, physical parameterization, and boundary condition related approximations. Each component and parameter of a model needs to be examined and possibly modified to ensure that all closure related (due to the effect of unresolved processes) and other types of uncertainty are properly simulated. With these new requirements Numerical Weather Prediction (NWP) models are expected to exhibit more realistic spatio-temporal variance, with a lower level of predictability. This can lead to a degradation of skill when only single control forecasts are considered. Ensemble-based probabilistic forecasts, however, are expected to improve since not only initial value related, but also model related variations in predictability will be captured. Ensembles with a perturbed model can also lead to a new, adaptive approach in NWP forecasting where the structure/parameters of a model can be initialized based on a systematic evaluation of the most recent short range ensemble forecasts.


1.            INTRODUCTION

            It has long been recognized that predictability in the atmosphere, as in other chaotic systems, is limited (see, e. g., Lorenz 1969). Any error in the initial condition or in the model used to make a prediction will eventually lead to a loss of predictability in the sense that no information will be retained in the forecast regarding the initial conditions. The loss of predictability, however, is not uniform in space and time. Ensemble forecasting, where a set of Numerical Weather Prediction (NWP) model forecasts are produced instead of a single “control” forecast, has been suggested as a way to quantify variations in predictability (Leith 1974; see also Toth et al. 2001, and references therein).

            Initially, ensemble forecasting was introduced at operational NWP forecast centers to account for initial condition related errors (European Centre for Medium Range Weather Forecasts, ECMWF, Molteni et al. 1996; National Centers for Environmental Prediction, NCEP, Toth and Kalnay 1993). The practice of ensemble forecasting has been widely accepted since, leading to operational implementation at numerous NWP centers (FNMOC, Rennick 1995; CMC, Houtekamer et al. 1996; JMA, Kyouda 2002; SAWB, Tennant, 2000; KMA 2001).

            Through a wealth of verification studies it became clear that ensemble forecasting, by providing flow dependent probability distributions of weather elements, enhances predictability (see, e. g., Ehrendorfer 1997). As several studies indicate, the potential economic value of weather forecasts can be substantially increased through the use of ensemble forecasts as compared to the use of single control forecasts only (Richardson 2000; Mylne et al. 1999; Zhu et al. 2002). This is achieved by providing a better estimate of the expected value of weather elements (ensemble mean), and by providing case dependent probability distributions (spread and higher moments). In particular, by tracking how uncertainty in the initial conditions dynamically evolves in time and space through the forecast process, an ensemble of forecasts can distinguish between situations with unusually high or low predictability (Toth et al. 2001). 

            Unless model related uncertainty is properly simulated, ensembles will not be able to capture variations in forecast uncertainty arising from model imperfections. Similarly, such ensembles will be deficient in the sense that, in cases where model error is present, they will not always be able to capture, in a probabilistic sense, the true evolution of the system that is predicted. This is reflected, for example, by a difference between the time evolution of the error in the ensemble mean forecast and the spread of ensemble members around this mean. In an ideal ensemble, these two quantities should be statistically identical. This is clearly not the case, for example, for the NCEP ensemble (see, e. g., Fig. 2 of Toth et al. 1998). The difference between the error and spread growth rates reflects the lack of representation of model related errors in the NCEP ensemble system.
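The spread-error consistency check described above can be sketched in a few lines. The toy illustration below (not an operational diagnostic) compares the spread of a synthetic ensemble around its mean with the RMS error of the ensemble mean against a verifying "analysis" drawn from the same distribution as the members; for a statistically consistent ensemble the two quantities should be approximately equal.

```python
import numpy as np

def spread_and_error(members, truth):
    """Compare ensemble spread with the RMS error of the ensemble mean.

    members: (n_members, n_points) forecast values
    truth:   (n_points,) verifying analysis values
    In a statistically consistent ensemble the two returned quantities
    should be approximately equal on average.
    """
    mean = members.mean(axis=0)
    spread = np.sqrt(((members - mean) ** 2).mean())   # spread about the mean
    error = np.sqrt(((mean - truth) ** 2).mean())      # RMS error of the mean
    return spread, error

rng = np.random.default_rng(0)
state = rng.normal(size=5000)                     # unknown "true" signal
members = state + rng.normal(size=(50, 5000))     # members: signal + unit noise
analysis = state + rng.normal(size=5000)          # analysis with the same noise
s, e = spread_and_error(members, analysis)        # s and e should be close
```

In an under-dispersive system (too little simulated uncertainty), the spread returned here would fall systematically below the error, which is the signature discussed for the NCEP ensemble above.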

            The next section is devoted to a short description of methods developed earlier for simulating model related errors. A comprehensive approach, based on the existing methods, for representing model related uncertainty in ensemble forecasting, along with suggested ways to estimate various aspects of the uncertainty, is introduced in section 3. A brief summary and further motivation for the use of the method is offered in section 4.


2.             EXISTING METHODS

            Though operational ensemble forecasts first focused on assessing initial value related uncertainty only, several attempts have been made since to introduce procedures that account for errors related to model imperfections. In one of the first such studies, Toth and Kalnay (1995) attempted to capture model related uncertainty by periodically increasing the size of nonlinear perturbations through the rescaling of difference fields between the control and perturbed ensemble forecasts. They assumed that with increasing lead time model related errors would, just as initial errors, become aligned with dynamically fast growing perturbations: the Lyapunov Vectors (LVs, Vannitsem and Nicolis 1994; Trevisan and Pancotti 1998) in a linear setting, or the bred vectors (BVs, Toth and Kalnay 1993; 1997) in a nonlinear setting. They postulated that such errors, to some extent, can be captured by the difference fields developing among nonlinear ensemble forecasts. Interestingly, Johnson et al. (2000) found that errors due to numerical inconsistency and truncation tend to appear in dynamically active baroclinic zones of the atmosphere (such as the jet regions of the extratropics), where nonlinear perturbation growth (and hence the perturbation enhancement suggested by Toth and Kalnay 1995) also tends to concentrate.

            In another attempt to capture model related uncertainty in ensemble forecasting, Houtekamer et al. (1996) used a distinctly different version of an NWP model for integrating each member of an ensemble (in addition to initial value perturbations generated by running independent analysis cycles). Various aspects of the model, such as convective parameterization, the treatment of orography, etc., were changed to arrive at model versions that overall produced comparably skillful forecasts. Interestingly, this approach led to an increase in the size of initial perturbations (as opposed to using the same model version), but no apparent change in the rate at which perturbations diverge (see Table 4 of Houtekamer et al. 1996). The use of different model versions may lead to differences in terms of the systematic behavior of the model (different systematic biases). However, the use of different model versions does not necessarily create ensemble members whose perturbation growth matches that exhibited by forecast error fields. Houtekamer et al. (1996) found the effect of the model perturbations beneficial in terms of increased ensemble variance, and the method was operationally implemented in the Canadian Meteorological Centre (CMC) ensemble forecast system.

            Buizza et al. (1999) introduced yet another technique to represent model errors, in this case related to the net effect of physical parameterization schemes in an NWP model. Buizza et al. (1999) recognized the uncertainty related to the parameterization of sub-grid scale processes in NWP models. To represent this uncertainty in ensemble forecasting, they proposed to introduce random perturbations by multiplying the total diabatic forcing, computed as the sum of all parameterized processes, by a number that varies randomly around 1 with an experimentally chosen variance and spatio-temporal coherence structure. The model perturbations led to an increase in the size of ensemble perturbations (albeit not sufficient to match forecast error growth) and were introduced into the operational ECMWF ensemble prediction system.
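The multiplicative perturbation idea of Buizza et al. (1999) can be illustrated with a minimal sketch. All numerical choices below (the standard deviation, the block size used to mimic spatial coherence, and the clipping range) are illustrative assumptions, not the operational ECMWF settings.

```python
import numpy as np

def perturbed_tendency(tendency, rng, sigma=0.2, nblock=8):
    """Multiply a parameterized tendency field by a spatially coherent
    random factor that varies around 1 (toy version of the Buizza et al.
    1999 idea). sigma, nblock, and the clipping range are illustrative."""
    n = tendency.shape[0]
    nblocks = -(-n // nblock)                        # ceil(n / nblock)
    factors = 1.0 + sigma * rng.standard_normal(nblocks)
    factors = np.clip(factors, 0.5, 1.5)             # keep multipliers bounded
    field = np.repeat(factors, nblock)[:n]           # block-constant multiplier
    return tendency * field

rng = np.random.default_rng(1)
tend = np.ones(64)                  # stand-in for a diabatic tendency field
pert = perturbed_tendency(tend, rng)
```

Holding the factor constant over blocks of grid points is the crudest possible stand-in for the spatial coherence structure mentioned in the text; a real implementation would use a smooth correlated random field.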

            In his method called “cross-pollination”, Smith (2000) modified the approach of Houtekamer et al. (1996). Smith (2000) recognized that the model version that best simulates reality may vary in time. To give the ensemble a chance to capture the real evolution of the atmosphere, he suggested that at a certain time frequency the various models used in the Houtekamer et al. (1996) approach be randomly switched among the ensemble members. This is achieved by integrating an N-member perturbed initial value ensemble M times, using M different model versions. After a certain time period the NxM forecasts are compared and only those N members that differ most from the others are retained, to be integrated again by the M different models.
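A toy version of the cross-pollination cycle might look as follows; the distance-from-the-mean selection rule is one simple reading of "differ most from the others", and the two linear "model versions" are purely illustrative.

```python
import numpy as np

def cross_pollinate(states, models, keep, steps):
    """Toy cross-pollination cycle: advance every member with every model
    version, then retain the `keep` candidates that differ most from the
    candidate mean (one simple reading of "differ most from the others")."""
    for _ in range(steps):
        candidates = np.array([m(s) for s in states for m in models])
        dist = np.linalg.norm(candidates - candidates.mean(axis=0), axis=1)
        states = candidates[np.argsort(dist)[-keep:]]   # most different ones
    return states

# Two illustrative "model versions" with slightly different linear dynamics
models = [lambda s: 1.01 * s, lambda s: 0.99 * s + 0.01]
rng = np.random.default_rng(2)
init = rng.normal(size=(4, 3))                          # N=4 members, 3 variables
final = cross_pollinate(init, models, keep=4, steps=5)
```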



            Each of the four methods discussed above addresses an important, yet limited aspect of model related uncertainty in ensemble forecasting. Model error is a multifaceted problem, and for that reason none of the existing methods can on its own fully account for it. Here we propose to further develop and combine various aspects of the existing methods to offer a comprehensive approach to treating model related uncertainty in NWP ensemble forecasting.


3.            A COMPREHENSIVE APPROACH

3.1            MOTIVATION

            The basic premise behind the proposed approach is that all sources of uncertainty in NWP modeling, including model related errors, need to be accounted for at their origin. This approach has been shown to be valid by a vast array of studies regarding initial value related uncertainty (see, e. g., Ehrendorfer 1997). It is generally accepted that initial ensemble perturbations must reflect the estimated uncertainty in the initial state of the atmosphere. This ensures that, as the perturbations evolve through the dynamics of the model in space and time, they will realistically represent forecast uncertainty related to the initial errors.

            As argued by Toth and Kalnay (1995), and shown in a simple model framework by Vannitsem and Toth (2002), errors introduced at each time step of the integration by the use of imperfect models are acted upon by the same unstable dynamics that affect and amplify initial errors. Vannitsem and Toth (2002) showed that the structure (e. g., temporal variations) of model errors can also have a large effect on their evolution. Therefore if one is to realistically simulate the effect of model errors in an ensemble of forecasts, sources of model related uncertainty need to be accounted for at their origin, just as in the case of initial errors.

            The application of this principle entails the introduction of a new approach to NWP modeling. In the pre-ensemble era of NWP, models were evaluated based on a single forecast started from the best initial state (control analysis and forecast). Necessarily, development efforts and the evaluation of NWP forecasts focused on minimizing the error in a single forecast of the future state of the atmosphere (expected value, first moment). As discussed in the Introduction, ensembles enhance the value of weather forecasts by providing case dependent uncertainty estimates (second and higher moments of the forecast probability distribution). Some advanced users of weather forecasts, such as energy companies, water management agencies, and emergency managers, actually demand such information (Zhu et al. 2002). Therefore in model development and associated verification studies higher order forecast statistics should be properly considered through the use of ensemble experiments (in place of single forecast tests). And in these efforts the utility of ensemble forecasts can be further enhanced by representing model related uncertainty in every aspect of NWP models as realistically as possible.


3.2            METHODOLOGY

            Due to scientific and computational limitations, numerical models provide only an inaccurate representation of reality. The choice and use of each model component and parameter, and the structure of the model as a whole, may contain approximations, thus introducing associated errors. Which of these approximations lead to relatively minor, and which to large, forecast errors may be case dependent. Only with a systematic effort aimed at representing model related forecast uncertainty at its origin can such variations possibly be captured. Three major sources of model related uncertainty, associated with numerical representation, parameterization schemes, and boundary conditions, will be briefly discussed below.


            Numerical uncertainties. An important class of model related forecast errors arises due to the unavoidable use of finite numerical resolution and representation. The choice and use of a particular horizontal and vertical spatial (grid-point or other type of) representation, vertical coordinate system, model variables, numerical scheme, etc., introduces errors that need to be represented in ensemble NWP model integrations.

            Of particular interest among the different types of numerical uncertainties is that related to the use of models with finite spatial resolution. Spatial truncation related errors are introduced at each time step of an integration. (The somewhat related effect of unresolved physical processes is discussed under the next heading.) As for the dynamics of the circulation, the effect of sub-grid scale processes on the resolved scales is represented in terms of diffusion applied to the smallest resolved scales. Note that such a scheme represents only one side of a two-way interaction that exists in nature between the smallest resolved and the largest unresolved scales. In such schemes the stochastic effect of the unresolved scales on the resolved scales, which contributes to the loss of predictability, is ignored. Such a one-way connection may lead to more “accurate” forecasts when only a single integration is concerned. The resulting lower than desired ensemble variance, however, will lead to sub-optimal ensemble mean and probabilistic forecast performance.

            The introduction of stochastic perturbations at each time step of the model integration, representing the random effect of the sub-grid scale dynamical (and physical) processes on the larger scale circulation, can ensure that truncation related uncertainty is accounted for in ensemble forecasting. The presumably short temporal, spatial, and cross-variable correlation structure of such noise should be based on the estimated structure of the relevant sub-grid scale processes.
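One simple way to generate such stochastic perturbations with a prescribed (short) temporal correlation is a first-order autoregressive ("red noise") process. The sketch below is generic, with illustrative amplitude and decorrelation parameters, and omits the spatial and cross-variable correlations for brevity.

```python
import numpy as np

def red_noise_stream(nsteps, npoints, tau, sigma, rng):
    """Stream of perturbation fields with stationary standard deviation
    `sigma` and temporal decorrelation time `tau` (in time steps), from a
    first-order autoregressive (AR(1), "red noise") process. One such field
    would be added to the model state at each integration step."""
    phi = np.exp(-1.0 / tau)                 # lag-1 autocorrelation
    amp = sigma * np.sqrt(1.0 - phi ** 2)    # keeps the stationary std at sigma
    x = sigma * rng.standard_normal(npoints) # start from the stationary state
    fields = np.empty((nsteps, npoints))
    for t in range(nsteps):
        x = phi * x + amp * rng.standard_normal(npoints)
        fields[t] = x
    return fields

rng = np.random.default_rng(3)
noise = red_noise_stream(nsteps=2000, npoints=200, tau=10.0, sigma=0.5, rng=rng)
```

The parameters tau and sigma stand in for the estimated correlation structure and amplitude of the sub-grid scale processes; in practice both would be derived from observations or very high resolution simulations.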


            Parameterization uncertainties. Beyond the dynamical processes discussed above, important physical processes such as boundary layer mixing, radiation, water phase changes, and convection also take place on the small, unresolved scales. The effect of these and other physical processes on the resolved scale circulation is usually parameterized as a deterministic function of the large-scale circulation. By ignoring the stochastic effect of the sub-grid scale physical processes on the resolved scales, most of these schemes, just like the diffusion discussed above, represent only one part of an interaction that is two-way in nature, potentially leading to insufficient ensemble variability. Just like truncation error, this is a closure type uncertainty that can be represented, as discussed above, by the introduction of stochastic noise with a short temporal and spatial correlation structure.

            The construction and use of parameterization schemes is associated with additional sources of uncertainty. Most parameterization schemes, for example, have at least one parameter value that needs to be set experimentally. Usually, after some theoretical considerations and/or preliminary experiments, parameter values are “optimized” through the verification of a large number of forecast experiments in which different parameter values are tested. The parameter value that leads to the best overall performance in a particular model configuration is selected for future use. While such an approach may lead to the best overall skill score when single forecasts are concerned, it will not represent model related uncertainty and therefore may not lead to optimal ensemble performance.

            It is proposed here that the parameter value to be used in ensemble NWP model integrations be chosen randomly from the range of parameter values that perform best in a large number of forecast experiments using different parameter values. The frequency with which the random values are drawn from the estimated range should correspond to how often each value performed best in the forecast tests of the different parameter values described above. Similarly, the values in an integration should change as a function of space and time according to an estimate of the spatial and temporal error correlation structure of the parameter in question. These variations can be estimated by evaluating how rapidly the best performing parameter value changes in time and space in the short-range forecast experiments described above.
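The proposed random draw of parameter values can be sketched as follows. The list of "best performing" values and the redraw frequency controlling the temporal persistence are hypothetical stand-ins for the verification statistics described above.

```python
import numpy as np

def draw_parameter(best_values, nsteps, tau, rng):
    """Time series of a randomly varying parameter for one ensemble member.
    `best_values` lists the values that won past verification experiments
    (repeats encode how often each value was best); a new value is drawn on
    average every `tau` steps, mimicking temporal persistence. Sketch only."""
    series = np.empty(nsteps)
    current = rng.choice(best_values)
    for t in range(nsteps):
        if rng.random() < 1.0 / tau:         # redraw with probability 1/tau
            current = rng.choice(best_values)
        series[t] = current
    return series

# Hypothetical verification outcome: 0.7 was best in 60% of cases, etc.
best = np.array([0.7] * 6 + [1.0] * 3 + [1.3] * 1)
rng = np.random.default_rng(4)
param = draw_parameter(best, nsteps=20000, tau=50.0, rng=rng)
```

Over a long integration the time series visits each value with approximately the empirical "winning" frequency, while staying constant over stretches of roughly tau steps, as the text requires.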

            NWP model development has led to the emergence of a variety of choices for most types of parameterization (and other) schemes. As an example, let us consider parameterization schemes representing convective processes, which at resolutions coarser than a few kilometers cannot be explicitly handled in a model. Each of the schemes (developed by Kuo, Arakawa, Grell, Moorthi, Kain and Fritsch, and others, see, e. g., Emanuel 1994) is based on somewhat different assumptions and uses different approximations. Interestingly, the different schemes can lead to rather different precipitation patterns (see, e. g., Gallus 1999). And while there may be differences in the overall level of skill these schemes provide, generally any of the schemes can perform best under certain flow configurations (Gallus and Segal 2001; Jankov and Gallus 2002).

            Both from a scientific and from a practical point of view it would be desirable to consolidate all relevant knowledge into a single scheme that would then be able to reproduce the effect of any of the existing schemes. The construction of such a scheme may not be feasible, either because a given parameterization problem is not unique or well posed, or because of gaps in scientific knowledge, or due to practical constraints. If that is the case, it is desirable to use the various schemes in parallel and apply a random linear combination of the individual schemes’ output as the feedback of parameterized processes onto the resolved scales of motion. Since the best performing scheme is observed to change from case to case (Gallus and Segal 2001), it is desirable to simulate these changes in a model designed to capture model related uncertainty. Therefore it is suggested that the temporal and spatial scales of the random changes in linear weights be based on an analysis of how fast the best performing schemes vary in an ensemble of short-range forecasts.
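A random linear combination of scheme outputs with smoothly varying weights might be sketched as below. The log-space AR(1) evolution of the weights is only one of many possible ways to obtain smooth, shock-free weight changes, and all parameter values are illustrative.

```python
import numpy as np

def blended_feedback(scheme_outputs, weights):
    """Convex combination of the feedback (tendencies) of several
    parameterization schemes run in parallel."""
    w = np.asarray(weights, dtype=float)
    return (w / w.sum()) @ scheme_outputs

def evolve_weights(w, tau, rng):
    """Smoothly perturb the weights via an AR(1) step on their logarithms,
    so the blend drifts between schemes without sudden switches. The
    log-space random walk is one illustrative choice, not a standard."""
    phi = np.exp(-1.0 / tau)
    z = phi * np.log(np.maximum(w, 1e-12)) \
        + np.sqrt(1.0 - phi ** 2) * rng.standard_normal(w.shape)
    w_new = np.exp(z)
    return w_new / w_new.sum()

rng = np.random.default_rng(5)
outputs = np.vstack([np.full(16, 1.0), np.full(16, 3.0)])  # two toy schemes
w = np.array([0.5, 0.5])
blend = blended_feedback(outputs, w)        # equal weights: halfway blend
w2 = evolve_weights(w, tau=20.0, rng=rng)   # slightly shifted weights
blend2 = blended_feedback(outputs, w2)      # still between the two schemes
```

Because the weights stay non-negative and are renormalized, the blended tendency always lies between the individual scheme outputs, avoiding the sudden switches of model versions discussed in connection with the cross-pollination method.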


            Boundary related uncertainties. Another type of model error arises due to the unavoidable use of finite model domains. While processes over the selected domain of a model are explicitly resolved at the model’s resolution, boundary conditions for atmospheric forecasts are typically specified in a deterministic fashion. For example, sea surface temperature and roughness conditions in uncoupled weather (MRF 2001) and climate (Ji et al. 1994; Livezey et al. 1996) atmospheric general circulation model forecasts, and lateral boundary conditions in Limited Area Model (LAM) forecasts (Black et al. 1999) are usually specified as the best estimate of such conditions, based on either uncoupled dynamical or statistical forecasts of the surrounding areas. Such a specification of the boundary conditions can, in a general sense, be considered as a closure scheme in which the two-way interactions occurring in nature between processes within and outside the selected domain are ignored, and replaced by a prescribed forcing on the circulation within the resolved domain. As in the cases where truncation and parameterization related feedback from the small scales was ignored, this can lead to under-dispersive ensemble forecasts and spurious estimates of predictability (for an example, see, e. g., Pena et al. 2002).

            To avoid sub-optimal ensemble performance it is suggested that the uncertainty in the specification of all forecast boundary forcing values be explicitly included in NWP integrations. Boundary forcing for different ensemble members should vary in time and space according to the estimated uncertainty and its spatio-temporal correlation structure. In the case of LAM ensemble forecasting (Du and Tracton 1999; Nutter 2001) and coupled ocean-atmosphere forecasting (Balmaseda, personal communication 2002) the adoption of this approach leads to a substantial increase in ensemble spread.
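The perturbation of boundary forcing can be illustrated with the toy sketch below, where spatial coherence is imposed simply by smoothing white noise with a moving average; operational systems would use more realistic space-time covariance models, and temporally correlated perturbations as well.

```python
import numpy as np

def perturbed_boundary(control_bc, sigma, length, nmembers, rng):
    """Perturbed boundary forcing for an ensemble: the control (best
    estimate) forcing plus spatially coherent noise, obtained here by
    smoothing white noise with a moving average of window `length` and
    rescaling to standard deviation `sigma`. Sketch only."""
    nsteps, npoints = control_bc.shape
    kernel = np.ones(length) / length
    scale = sigma * np.sqrt(length)          # restores std after smoothing
    out = np.empty((nmembers, nsteps, npoints))
    for m in range(nmembers):
        for t in range(nsteps):
            white = rng.standard_normal(npoints + length - 1)
            out[m, t] = control_bc[t] + scale * np.convolve(
                white, kernel, mode="valid")
    return out

rng = np.random.default_rng(6)
control = np.zeros((200, 40))                # toy best-estimate boundary values
bcs = perturbed_boundary(control, sigma=0.3, length=5, nmembers=3, rng=rng)
```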


            Practical considerations. The examples discussed above were used only to illustrate numerical, parameterization, and boundary related model errors. For a successful depiction of model related forecast uncertainty, all aspects and components of NWP models need to be similarly analyzed with the aim of ascertaining uncertain elements, and developing ways to explicitly represent them in ensemble forecast applications.

            The construction or modification of a model as described above may be a difficult and laborious task. Not all model components and parameters, however, may have an important contribution to the description of case dependent model related errors. Perturbations to a few key components/parameters, accompanied by appropriate closure related perturbations, may achieve much of what is practically possible. In the short term, these critical model components/parameters need to be identified and modified so they can explicitly simulate model related forecast uncertainty. In the long term, however, models should be built to simulate model related uncertainty in as many components/parameters as possible and practical.

            Our inability to perfectly model the atmosphere, due to scientific, computational, or practical limitations, will likely also prevent us from perfectly representing all model related uncertainty. Even if numerical, parameterization, and boundary type uncertainties are accounted for as well as they can be, some model related uncertainty may still remain unexplained. For lack of a better approach, this residual uncertainty can be represented by increasing the level of the closure related stochastic perturbations discussed above in connection with truncation related uncertainty. This approach is based on the assumption that all yet unexplained uncertainty is associated with sub-grid scale processes that appear random with respect to the resolved scales, and can therefore be simulated by stochastic noise. The magnitude of these residual perturbations should be set to ensure that an ensemble of forecasts that properly accounts for all known sources of forecast error (both in initial value and in model structure, including numerical, parameterization, and boundary related terms) has a spread around its mean that is approximately equal to the error in the ensemble mean forecast. This is a necessary condition for a perfect ensemble, in which the verifying analysis, in a statistical sense, is indistinguishable from the ensemble members (Buizza 1997).
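Setting the residual perturbation magnitude so that ensemble spread matches the ensemble mean error is, in essence, a one-parameter calibration problem. The sketch below solves it by bisection for a hypothetical toy "ensemble system" in which spread grows monotonically with the perturbation amplitude; the functional form of the toy system is an assumption for illustration only.

```python
import numpy as np

def tune_residual_amplitude(run_ensemble, target_error, lo=0.0, hi=5.0, iters=40):
    """Bisection for the residual perturbation amplitude at which the
    ensemble spread (returned by run_ensemble, assumed monotonically
    increasing in the amplitude) matches a target ensemble-mean error."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if run_ensemble(mid) < target_error:
            lo = mid                         # spread too small: increase it
        else:
            hi = mid                         # spread too large: decrease it
    return 0.5 * (lo + hi)

def toy_spread(sigma, base=0.6):
    """Hypothetical spread-amplitude relation for a toy ensemble system:
    base spread from known perturbations plus residual noise in quadrature."""
    return np.sqrt(base ** 2 + sigma ** 2)

# Find the residual amplitude matching a target ensemble-mean error of 1.0
sigma_star = tune_residual_amplitude(toy_spread, target_error=1.0)
```

In practice run_ensemble would be a (costly) set of reforecast experiments verified over many cases, so each "function evaluation" in the loop is itself a full ensemble verification exercise.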



            The method proposed here to account for model errors caused by the effect of the unresolved scales (closure or truncation type uncertainty) is related to that of Toth and Kalnay (1995). The introduction of weakly correlated random noise at each integration time step, in place of dynamically evolved perturbations in their approach, may be more realistic, however, since it can better simulate how closure related random errors project in time on dynamically fast growing perturbation directions.

            The method suggested here for representing closure type uncertainty is also related to parameterization schemes in which certain choices are made in a random fashion. In the convective parameterization scheme of Moorthi (2000), for example, the cloud types used at each time step are chosen from a randomly selected subset of all possible types. This is done with the same intention as that of the introduction of closure type perturbations proposed above, to represent the effect of sub-grid scale processes which can be considered stochastic as far as the resolved scales are concerned.

            The method proposed here to account for parametric uncertainty is related to that of Buizza et al. (1999). Instead of treating all uncertainty related to parameterized diabatic processes in the aggregate, the proposed approach traces various aspects of parametric uncertainty to their root. This allows for more realism in representing parametric model uncertainty, which is expected to lead to improved ensemble forecast performance. For example, realistic and large variations in a parameter may lead to large changes in diabatic forcing in some cases, but only minor changes in others.

            The practical approach proposed here to represent uncertainty related to the choice of physical parameterization (or other) components, where a number of schemes are run in parallel, builds on the method of Houtekamer et al. (1996). While Houtekamer et al. (1996) create a number of independent versions of the model for their ensemble integrations, the method proposed here uses different parameterization schemes within a single model version that is used for all ensemble integrations. The proposed method is also related to the cross-pollination method of Smith (2000). While Smith (2000) periodically switches the different model versions of Houtekamer et al. (1996) among the ensemble members, the smoothly varying random linear combination proposed here of the parameterized feedback values computed for each of the schemes run in parallel avoids sudden and unrealistic shocks that can lead to imbalance.



            One of the scientific objectives of the proposed comprehensive approach to simulating model errors in NWP ensemble forecasting is to enable a model to generate diverse forecasts that currently can only be achieved, if at all, through the use of different models and/or model components. The integration of our knowledge, currently fragmented among various NWP models and model components, into a single framework is a scientific challenge that can contribute to advances in NWP modeling.

            While the proposed additions and changes in NWP modeling are designed to satisfy the needs of ensemble forecast applications, they also offer potentially important feedback to NWP modeling efforts in general. The outlined approach not only allows for a careful analysis of different types of model errors but can also provide guidance as to the proper choice of model components and parameters. In an adaptive NWP modeling setup, the short-range performance of recent forecasts can be evaluated by assessing the best member in an ensemble in which model components and parameters are varied. If the ensemble is large enough to filter out the effect of the different initial perturbations, this verification information can potentially be used to “initialize” the model for the next forecast cycle. A “currently optimal”, modified distribution of parameters and/or parameterization schemes can be established from which values/schemes are selected initially, before the distributions at longer lead times revert to their climatologically established ranges.
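The adaptive "initialization" of the parameter distribution might be sketched as a simple blend of climatological best-performing frequencies with the most recent verification result; the blending weight alpha and the counts below are hypothetical.

```python
import numpy as np

def adaptive_weights(clim_counts, recent_best, alpha=0.5):
    """Blend climatological best-performing frequencies with the most
    recent short-range verification result to "initialize" the parameter
    distribution for the next forecast cycle. alpha (the weight given to
    the recent result) is a hypothetical tuning knob."""
    values = sorted(clim_counts)
    clim = np.array([clim_counts[v] for v in values], dtype=float)
    clim /= clim.sum()                                   # climatological freq.
    recent = np.array([1.0 if v == recent_best else 0.0 for v in values])
    return dict(zip(values, (1.0 - alpha) * clim + alpha * recent))

# Hypothetical counts of cases in which each parameter value verified best
weights = adaptive_weights({0.7: 60, 1.0: 30, 1.3: 10}, recent_best=1.0)
```

At longer lead times alpha would be decreased toward zero, letting the distribution revert to its climatologically established range as described in the text.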

            The inclusion of numerical, parameterization, and boundary type perturbations as described above is expected to lead to more realistic model behavior. As pointed out earlier, reality “changes” in the sense that its behavior can be best described by different model versions over different periods of time. Introducing such changes into the model itself is expected to lead to more realism, including in the climatological variance of a model (Palmer 2001). One should note, however, that the inclusion of model perturbations into a single control model integration is expected to lead to an inferior estimate of the first moment of the forecast probability distribution. The skill degradation, in some sense, is similar to that experienced when the initial condition is perturbed to represent initial value related uncertainty. The value added by the use of a more realistic model (in this case, one containing model perturbations) becomes visible only when an ensemble of forecasts is considered, in terms of improved reliability (statistical consistency) and sharpness (or resolution, i. e., an ability to distinguish between high and low uncertainty situations) of the resulting probabilistic forecasts. This has been demonstrated by Toth et al. (2002), who found that degrading the model through a more severe horizontal truncation beyond 3.5 days lead time improved single control forecast performance while degrading ensemble forecast performance.

            In light of the mounting evidence indicating the added value ensembles provide in weather forecasting, the goal of NWP model development must evolve. NWP model improvements should no longer aim at generating a single control forecast as the best estimate of the expected value of atmospheric variables - that is demonstrably best achieved by taking the mean of an ensemble of forecasts. Rather, NWP improvements should aim at providing sharp and reliable probabilistic forecasts. The changed focus requires new methods (use of ensembles vs. single forecasts) and associated verification tools (probabilistic measures vs. scores developed for single value forecasts). In such a more general, ensemble forecasting framework an NWP model is not complete unless it is able to explicitly simulate all known model related uncertainty.



4.            SUMMARY

            In a chaotic system like the atmosphere, where predictability is lost in a flow dependent manner, the provision of a single control forecast is of limited value. Ensemble forecasting, where multiple forecasts are generated through initial and model perturbations, can be used to assess variations in predictability. Error evolution, and related perturbation techniques associated with initial value uncertainty, have been extensively studied. Model related errors, however, have not been studied as thoroughly. And while the evolution of initial value related perturbation techniques does show some convergence, no universally accepted method exists for accounting for model related uncertainty in ensemble forecasting.

            Based on earlier methods, this paper outlines a comprehensive approach for simulating all known aspects of model related uncertainty in NWP ensemble forecasting associated with the use of a particular imperfect model. Model related errors are classified according to numerical, physical parameterization, and boundary condition type uncertainty. As for the uncertainty associated with the use of a finite numerical resolution, it was suggested that at each time step random perturbations with an appropriate spatio-temporal correlation structure be introduced to represent the stochastic effect of sub-grid scale dynamical and physical processes. This stochastic feedback of the small scales onto the resolved scales is generally missing from current diffusion and physical parameterization schemes. The lack of the two-way interaction in current NWP models that is present in nature between the explicitly resolved and unresolved processes is a likely cause of insufficient spread in ensemble forecasting.

            It was emphasized that the use of finite model domains, where the forecast boundary conditions need to be specified, also increases forecast errors. Again, a lack of two-way interaction in a model, here between processes inside and outside the model domain, leads to reduced ensemble variability and sub-optimal ensemble performance. As in the case of the other closure type errors related to the use of finite resolution, the problem can be ameliorated by simulating boundary type errors with an appropriate spatio-temporal correlation structure.

            Beyond the closure type errors related to numerical, parameterization, and boundary uncertainties, the choice and use of a particular physical parameterization (or other) scheme introduces additional errors. To simulate such errors in an ensemble forecasting environment, it was suggested that the knowledge now fragmented among existing physical parameterization schemes (such as the numerous convective schemes) be integrated into a single scheme capable of reproducing forecasts generated with any of the existing schemes. Recognizing how difficult this may be in practice, an alternative was suggested in which the existing schemes run in a model in parallel and their feedback values are linearly combined in a random fashion.
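The alternative just described, running several schemes in parallel and randomly blending their feedback, can be sketched as follows. The convex random weights are an illustrative assumption; the paper does not prescribe a specific weighting recipe.

```python
import numpy as np

def blended_tendency(tendencies, rng=None):
    """Randomly weighted linear combination of the tendencies ("feedback
    values") produced by several parallel parameterization schemes.
    The convex random weights are illustrative, not the authors' exact recipe."""
    rng = np.random.default_rng() if rng is None else rng
    w = rng.random(len(tendencies))
    w /= w.sum()                      # weights are non-negative and sum to one
    return sum(wi * t for wi, t in zip(w, tendencies))
```

Drawing new weights each time step (or each forecast) keeps the blended tendency within the envelope spanned by the individual schemes while sampling the uncertainty of choosing among them.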

            For simulating uncertainty related to the choice of a particular parameter value within a scheme it was suggested that parameter values be randomly varied in space and time within a range of realistic values. The climatological range of parameter values, as well as their spatio-temporal correlation can be established based on verification statistics of short-range ensemble forecasts generated with different parameter values.
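A bounded random walk gives one simple way to realize the proposed in-range parameter variation; the sketch below varies a scalar parameter in time only (a spatially varying field would follow the same pattern per grid point). The AR(1) pull toward the range midpoint, the value of tau, and the noise scale are all illustrative assumptions.

```python
import numpy as np

def wander_parameter(value, lo, hi, tau=0.95, rng=None):
    """One update of a physics parameter that drifts randomly in time yet
    stays within a climatologically realistic range [lo, hi]: an AR(1)
    pull toward the range midpoint plus noise, clipped to the bounds.
    tau and the noise amplitude are illustrative choices."""
    rng = np.random.default_rng() if rng is None else rng
    mid, half = 0.5 * (lo + hi), 0.5 * (hi - lo)
    step = tau * (value - mid) + np.sqrt(1.0 - tau ** 2) * half * rng.standard_normal()
    return float(np.clip(mid + step, lo, hi))
```

The bounds lo and hi would be the climatological range estimated from short-range ensemble verification, as the text suggests, and tau would encode the desired temporal correlation of the parameter variations.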

            The simulation of model related errors, required to fully realize the promise of ensemble forecasting, is a relatively new concept in NWP modeling. Traditionally, NWP models have been constructed to produce, with a single forecast integration, as accurate an atmospheric forecast as possible. To achieve this goal, single forecasts were started from the best available initial conditions. In addition, closure schemes were developed and used that ignored the feedback of the unresolved scales and domains of motion on the resolved circulation. Models with such closure schemes possess an unrealistically high level of predictability (see Toth 1991) that in turn can lead, as Toth et al. (2002) showed, to an increase in single control forecast skill. In this traditional approach to NWP modeling, second (and higher) order forecast statistics (spatio-temporal variance) have not been studied carefully and may have been compromised for the sake of improved single forecast performance.

Once model related uncertainty is properly simulated in an NWP model for ensemble applications, forecasts are expected to become more realistic in terms of their spatio-temporal variance. The simulation of two-way interactions between the resolved and unresolved scales and domains of motion, lost when using traditional closure schemes, is also expected to lead to models with more realistic, reduced limits of predictability. The use of a more realistic model, ironically, may in turn lead to less accurate single control forecasts, as demonstrated by Toth et al. (2002). Just as in the case of initial value perturbations, however, the performance of ensemble forecasts is expected to improve, since the ensemble will be able to distinguish between more and less predictable situations depending not only on initial value but also on model related forecast diversity. It follows that certain NWP model improvements are possible, and can be evaluated, only in an ensemble forecast environment.

            Ensemble forecasting has a mutually beneficial, two-way interaction with data assimilation. Initial value related short range forecast uncertainty, which is of great importance in data assimilation, is best characterized by an ensemble, while the ensemble is best initialized with information on analysis error characteristics revealed by a data assimilation scheme. The proposed new approach to representing model related uncertainty highlights a similar link between ensemble forecasting and NWP modeling. As argued throughout this paper, the simulation of the different types of model related uncertainty can substantially enhance ensemble forecasting. At the same time, the resulting ensemble forecasts can feed back positively on NWP model development: the evaluation of ensemble forecasts representing model related uncertainty may facilitate a new, adaptive approach to the selection of model components and parameters, based on the most recent verification statistics. A synergistic interaction between NWP model development and ensemble forecasting may one day be as rewarding as that currently developing between data assimilation and ensemble forecasting.


ACKNOWLEDGEMENTS. The authors benefited from discussions with a number of colleagues, including Joe Tribbia (NCAR), Tim Palmer and Roberto Buizza (ECMWF), Peter Houtekamer (CMC), Mike Fiorino (LLNL), William Gallus (Iowa State University), Shrinivas Moorthi, Hua-Lu Pan, and Stephen Lord (NCEP), and Prashant Sardeshmukh (CDC).



Black, T. L., G. J. DiMego, and F. Mesinger, 1999: A test of the ETA lateral boundary conditions scheme. In: Research Activities in Atmospheric and Oceanic Modelling, Ed. H. Ritchie. CAS/JSC Working Group on Numerical Experimentation, Report No. 28, WMO/TD-No. 942, p. 5.9-10.


Buizza, R., 1997: Potential forecast skill of ensemble prediction and spread and skill distributions of the ECMWF ensemble prediction system. Mon. Wea. Rev.,  125, 99-119.


Buizza, R., M. Miller, and T. N. Palmer, 1999: Stochastic simulation of model uncertainty in the ECMWF ensemble prediction system.  Q. J. R. Meteorol. Soc., 125, 2887-2908.


Du, J., and M. S. Tracton, 1999: Impact of lateral boundary conditions on regional-model ensemble prediction.  In Research activities in atmospheric and oceanic modelling (edited by H. Ritchie), Report 28, CAS/JSC Working Group Numerical Experimentation (WGNE), WMO/TD No. 942, 6.7-6.8.


Ehrendorfer, M., 1997:  Predicting the uncertainty of numerical weather forecasts: a review.  Meteorol. Zeitschrift, 6, 147-183.


Emanuel, K.A., 1994: Atmospheric Convection.  Oxford University Press, New York, 580 pp.


Gallus, W. A., Jr., 1999:  Eta simulations of three extreme precipitation events: Impact of resolution and choice of convective parameterization.  Wea. and Forecasting, 14, 405-426.


Gallus, W. A., Jr., and M. Segal, 2001:  Impact of improved initialization of mesoscale features on convective system rainfall in 10 km Eta simulations.  Wea. Forecasting, 16, 680-696.


Jankov, I. and W. A. Gallus, Jr., 2002: Contrasts between good and bad forecasts of warm season MCSs in 10 km Eta simulations using two convective schemes.  Preprints, 15th Conf. on Numerical Weather Prediction, Aug. 12-16, San Antonio, TX (in press).


Ji, M., A. Kumar, and A. Leetmaa, 1994: An experimental coupled forecast system at the National Meteorological Center: Some early results. Tellus, 46A, 398- 418.


Johnson, D. R., A. J. Lenzen, T. H. Zapotocny, and T. K. Schaack, 2000:  Numerical uncertainties in the simulation of reversible isentropic processes and entropy conservation.  J. Climate, 13, 3860-3884.


Houtekamer, P. L., L. Lefaivre, J. Derome, H. Ritchie, and H. L. Mitchell, 1996: A system simulation approach to ensemble prediction.  Mon. Wea. Rev., 124, 1225-1242.


KMA, 2001: Numerical Weather Prediction. Korea Meteorological Administration, Newsletter, Vol. 2, No. 1.


Kyouda, M., 2002: Ensemble prediction system. In: Outline of the operational NWP at the JMA. Japan Meteorological Agency, Ed.: T. Fujita, p. 59-63. [Available from JMA]


Leith, C. E., 1974: Theoretical skill of Monte-Carlo forecasts.  Mon. Wea. Rev.,  102, 409-418.


Livezey, R. E., M. Masutani and M. Ji, 1996: SST-forced seasonal simulation and prediction skill for versions of the NCEP/MRF model. Bull. Amer. Meteor. Soc., 77, 507-517.


Lorenz, E. N., 1969: The predictability of a flow which possesses many scales of motion. Tellus, 21, 289-307.


Molteni, F., R. Buizza, T. N. Palmer, and T. Petroliagis, 1996: The ECMWF ensemble prediction system: methodology and validation. Quart. J. Roy. Meteor. Soc., 122, 73-119.


Moorthi, S., 2000: Application of Relaxed Arakawa-Schubert Cumulus parameterization to the NCEP Climate Model: Some sensitivity experiments. General Circulation Model Development, Past Present and Future, Ed. David Randall, Publisher Academic Press, International Geophysics Series, vol 70, 257-284.


MRF, 2001: The Medium-Range Forecast Model. Online documentation highlights of the NCEP MRF numerical weather prediction model. Available at:


Mylne, K. R., 1999: The use of forecast value calculations for optimal decision making using probability forecasts. Preprints of the 17th AMS Conference on Weather Analysis and Forecasting, 13-17 September 1999, Denver, Colorado, 235-239.


Nutter, P. A., 2001: Dynamic selection from among an ensemble of lateral boundary conditions for limited-area models. Preprints, 9th AMS Conference on Mesoscale Processes, Ft. Lauderdale, FL., 30 July - 2 Aug. 2001, 361-365.


Palmer, T. N., 2001: A nonlinear dynamical perspective on model error: A proposal for non-local stochastic-dynamic parametrization in weather and climate prediction models. Q. J. R. Meteorol. Soc., 127, 279-304.


Pena, M., E. Kalnay, and M. Cai, 2002: Statistics of coupled ocean and atmosphere intraseasonal anomalies in reanalysis and AMIP data. Nonlinear Processes in Geophysics, under review.


Rennick, M. A., 1995: The ensemble forecast system (EFS). Models Department Technical Note 2-95, Fleet Numerical Meteorology and Oceanography Center. p. 19. [Available from: Models Department, FLENUMMETOCCEN, 7 Grace Hopper Ave., Monterey, CA 93943.]


Richardson, D. S., 2000: Skill and relative economic value of the ECMWF ensemble prediction system. Quart. J. Roy. Meteorol. Soc., 126, 649-668.


Smith, L. A.,  2000: Disentangling Uncertainty and Error: On the Predictability of Nonlinear Systems. In Nonlinear Dynamics and Statistics, ed. Alistair I. Mees, Boston, Birkhauser, 31--64.


Tennant, W., 2000: Applications of ensemble forecasting products for medium-range forecasts. Proceedings of the WMO Workshop on use of ensemble prediction systems. Available as an online document at: PROCEEDINGS/Lecture-15.doc


Toth, Z., 1991: Estimation of atmospheric predictability by circulation analogs. Mon. Wea.  Rev., 119,  65-72.


Toth, Z., and E. Kalnay, 1993: Ensemble forecasting at the NMC: The generation of perturbations. Bull. Amer. Meteorol. Soc., 74, 2317-2330.


Toth, Z., and E. Kalnay, 1995: Ensemble forecasting with imperfect models.  In: Research Activities in Atmospheric and Oceanic Modelling, Ed. A. Staniforth. CAS/JSC Working Group on Numerical Experimentation, Report No. 21, WMO/TD-No. 665, p. 6.30.


Toth, Z., and E. Kalnay, 1997: Ensemble forecasting at NCEP and the breeding method.  Mon. Wea. Rev., 125, 3297-3319.


Toth, Z., Y. Zhu, T. Marchok, S. Tracton, and E. Kalnay, 1998: Verification of the NCEP global ensemble forecasts. Preprints of the 12th Conference on Numerical Weather Prediction, 11-16 January 1998, Phoenix, Arizona, 286-289.


Toth, Z., Y. Zhu, and T. Marchok, 2001: The ability of ensembles to distinguish between forecasts with small and large uncertainty. Weather and Forecasting, 16, 436-477.


Trevisan, A., and F. Pancotti, 1998: Periodic orbits, Lyapunov vectors, and singular vectors in the Lorenz system.  J. Atmos. Sci., 55, 390-398.


Vannitsem, S., and C. Nicolis, 1998:  Dynamics of fine-scale variables versus averaged observables in a T21L3 quasi-geostrophic model.   Quart. J. Royal Meteor. Soc., 124, 2201-2226.


Vannitsem, S., and Z. Toth, 2002: Short-term dynamics of model errors. J. Atmos. Sci., in press.


Zhu, Y., Z. Toth, R. Wobus, D. Richardson, and K. Mylne, 2002: The economic value of ensemble based weather forecasts. Bull.  Amer.  Meteorol.  Soc., 83, 73-83.