This study proposes a dynamical performance-ranking method, called the Du-Zhou Ranking Method, to predict the relative performance of individual ensemble members by assuming that the ensemble mean is a good estimate of the truth. The results from this study show that the method (1) generally works well, especially for shorter ranges such as a one-day forecast; (2) has less error in predicting the extreme performers (e.g., the best and worst members) than the intermediate performers in between; (3) works better when the variation in performance among ensemble members (called "error separation") is large; (4) works better when model bias is small; (5) works better in a multi-model than in a single-model ensemble environment; and (6) works best when using the magnitude difference between a member and its ensemble mean as the "Distance" measure in ranking members. The ensemble mean and median forecasts generally perform quite similarly to each other.
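The core of the ranking step can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes forecasts are stored as a 2-D array (members × grid points) and uses the domain-averaged absolute difference from the ensemble mean as the "Distance" measure, the variant the abstract identifies as working best.

```python
import numpy as np

def rank_members(forecasts):
    """Rank ensemble members from predicted best to predicted worst.

    forecasts: array of shape (n_members, n_points), one row per member.
    The ensemble mean is used as a proxy for the (unknown) truth, and
    each member's "Distance" is its domain-averaged absolute difference
    from that mean.  Returns member indices, smallest Distance first.
    """
    ens_mean = forecasts.mean(axis=0)                 # proxy for truth
    distance = np.abs(forecasts - ens_mean).mean(axis=1)
    return np.argsort(distance)

# Toy example: three members on a two-point domain.
forecasts = np.array([[1.0, 1.0],
                      [2.0, 2.0],
                      [9.0, 9.0]])
print(rank_members(forecasts))  # member 1 lies closest to the mean [4, 4]
```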
As a demonstration, this method was applied to a weighted ensemble average to see whether it can improve the ensemble mean forecast over the commonly used, simple equally-weighted ensemble averaging method. The result indicates that the weighted ensemble mean forecast based on this ranking method has a smaller systematic error. This superiority of the weighted over the simple mean is especially true for smaller ensembles, such as 5 and 11 members, but it decreases as ensemble size increases and almost vanishes by 21 members. There is, however, little impact on the random error and spatial patterns of ensemble mean forecasts. These results imply that it might be difficult to improve the ensemble mean by merely weighting members once an ensemble reaches a certain size. However, the effectiveness of weighted averaging is expected to improve as the ensemble spread, or the ranking method itself, improves.
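The rank-based weighting step might look like the sketch below. The abstract does not specify the weight scheme, so the inverse-rank weights used here are a hypothetical choice for illustration; only the overall idea (better-ranked members receive larger weights) comes from the source.

```python
import numpy as np

def weighted_mean_by_rank(forecasts):
    """Weighted ensemble mean with larger weights for better-ranked members.

    forecasts: array of shape (n_members, n_points).  Members are ranked
    by their mean absolute difference from the equally-weighted ensemble
    mean; the inverse-rank weights (1, 1/2, 1/3, ...) are a hypothetical
    scheme, not necessarily the one used in the study.
    """
    ens_mean = forecasts.mean(axis=0)
    distance = np.abs(forecasts - ens_mean).mean(axis=1)
    rank = np.argsort(np.argsort(distance))   # 0 = predicted best member
    weights = 1.0 / (rank + 1.0)
    weights /= weights.sum()                  # normalize to sum to 1
    return weights @ forecasts

# Toy example: the member nearest the mean dominates the weighted average.
forecasts = np.array([[1.0, 1.0],
                      [2.0, 2.0],
                      [9.0, 9.0]])
print(weighted_mean_by_rank(forecasts))
```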