Department of Atmospheric Sciences

University of Washington

A short-range ensemble forecasting (SREF) system was developed at the University of Washington with the goal of producing useful mesoscale forecast probabilities (FP). Eight analyses from different operational forecast centers were used as initial conditions (ICs) for running the MM5. Real-time, 0–48-h SREF predictions were produced and analyzed for 129 cases over the Pacific Northwest during the 2002–03 cool season.

Although the inclusion of model perturbations increased dispersion toward statistical consistency, dispersion remained inadequate. A multimodel "poor man's" ensemble exhibited greater dispersion and superior performance on the larger spatial scales. Model perturbations improved mesoscale FP skill in both reliability and resolution. Correcting the systematic model error (bias) was an important component of the analysis and also acted to improve FP skill. An ensemble made up of unequally likely members was found to be skillful as long as each member occasionally performed well. The University of Washington mesoscale ensemble (UWME) study indicates substantial utility in current SREF systems and suggests several avenues for further development.
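The bias correction and forecast probability steps described above can be sketched as follows. This is a minimal illustration, not the exact UWME procedure: the training-period mean-error removal, the array layout, and the "democratic" (member-counting) probability are assumptions for the example.

```python
import numpy as np

def bias_correct(train_fcst, train_obs, fcst):
    """Remove each member's mean systematic error (bias),
    estimated over a training period, from a current forecast.

    train_fcst : (n_members, n_train) past forecasts per member
    train_obs  : (n_train,) verifying observations
    fcst       : (n_members,) current raw ensemble forecast
    """
    bias = (train_fcst - train_obs).mean(axis=1)  # per-member mean error
    return fcst - bias

def forecast_probability(members, threshold):
    """Democratic forecast probability: the fraction of ensemble
    members exceeding the event threshold."""
    return np.mean(members > threshold)
```

Removing the bias before counting members keeps systematic model error from masquerading as genuine forecast uncertainty, which is one way such a correction can improve both reliability and resolution.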

from a Probabilistic Perspective

One measure of the utility of ensemble prediction systems is the relationship between ensemble spread and forecast accuracy. Unfortunately, this relationship is often characterized by inadequate measures, such as the spread-error correlation, that make two critical assumptions: (1) a linear dependency between ensemble spread and forecast error and (2) an end user who has a continuous sensitivity to forecast error. Moreover, error forecasts based on a linear regression equation are deterministic and are thus limited in their usefulness.
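The spread-error correlation criticized above is typically computed along these lines. The choice of ensemble standard deviation as the spread metric and absolute ensemble-mean error as the error metric is one common convention assumed here, not the only one.

```python
import numpy as np

def spread_error_correlation(ens, obs):
    """Pearson correlation between ensemble spread (standard deviation
    about the ensemble mean) and the absolute error of the ensemble mean.

    ens : (n_cases, n_members) ensemble forecasts
    obs : (n_cases,) verifying observations
    """
    spread = ens.std(axis=1, ddof=1)
    error = np.abs(ens.mean(axis=1) - obs)
    return np.corrcoef(spread, error)[0, 1]
```

A single correlation coefficient summarizes only the linear component of the spread-error dependence, which is exactly the first assumption at issue.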

These issues are investigated with a non-dynamical stochastic model that provides highly idealized ensemble forecasts. The upper bound on the expected performance of real ensemble prediction systems is thus quantified. The simple stochastic model allows for the calculation of spread-error correlations under a variety of spread and error metrics. The linear dependence assumption is shown to be invalid.

A fully probabilistic understanding of the forecast error prediction problem is both necessary and beneficial. A probabilistic approach is suggested in which Gaussian error climatologies, conditioned on the ensemble spread, are used as forecast error predictions. The ensemble spread-skill relationship is thus quantified by the skill of such probabilistic error forecasts relative to the overall error climatology. For idealized ensemble forecasts, the skill of spread-based, conditional error climatology forecasts is nearly equal to the skill of forecasts taken directly from the ensemble probability density function. It is shown that forecast error predictability is highest for cases with extreme spread and lowest for cases with near-normal spread, reinforcing earlier results. Additionally, end users should choose a spread metric consistent with their own cost function to form the most appropriate error climatologies.
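The spread-conditioned Gaussian error climatology can be sketched as follows. Binning by spread quantiles, fitting a zero-mean Gaussian per bin, and scoring with the average negative log likelihood (the ignorance score) are illustrative choices, not necessarily those of the study.

```python
import numpy as np

def conditional_error_climatology(spread, error, n_bins=5):
    """Fit a zero-mean Gaussian error climatology within each spread
    bin, with bins defined by spread quantiles.

    Returns the bin edges and the error standard deviation per bin.
    """
    edges = np.quantile(spread, np.linspace(0.0, 1.0, n_bins + 1))
    edges[-1] = np.inf  # ensure the largest spread falls in the last bin
    sig = np.empty(n_bins)
    for i in range(n_bins):
        in_bin = (spread >= edges[i]) & (spread < edges[i + 1])
        sig[i] = np.sqrt(np.mean(error[in_bin] ** 2))  # Gaussian MLE scale
    return edges, sig

def ignorance(error, sig):
    """Average negative log likelihood of the errors under N(0, sig**2);
    sig may be a scalar (overall climatology) or per-case array."""
    return np.mean(0.5 * np.log(2 * np.pi * sig**2)
                   + error**2 / (2 * sig**2))
```

Comparing the ignorance of the spread-conditioned climatology against that of the single overall error climatology gives one concrete measure of the spread-skill relationship: when spread carries information about error, the conditional forecasts score better.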