Meaningful Evaluation Strategies for Numerical Model Forecasts using MET
Tara Jensen, NCAR
July 29th, 3:45 pm, Room 2155
Abstract:
The Developmental Testbed Center (DTC) serves as a bridge between
research and operations for numerical weather forecasts. Thus, the DTC
staff have developed and incorporated a number of strategies for doing
meaningful evaluations of these forecasts to ensure that the weather
community can trust that real and significant improvements are realized
prior to operational implementation. Use of appropriate metrics;
accurate estimates of uncertainty; consistent, independent
observations; and large, representative samples are essential elements
of a meaningful evaluation. Spatial, temporal, and conditional analyses
should be incorporated wherever appropriate.
Numerous new methods have been proposed to improve the quantification
and diagnosis of forecast performance using spatial, probabilistic,
uncertainty, and ensemble information. The complexity of these methods
makes verification software development difficult and impractical for
many users. Further, verification results produced with the same methods
may not be comparable when different software is used. To address these
issues, community verification software has been developed through a
joint partnership between the DTC and
the Research Applications Laboratory (RAL) at the National Center for
Atmospheric Research (NCAR) to provide a consistent and complete set of
verification tools. This software package, the Model Evaluation Tools
(MET; http://www.dtcenter.org/met/users/index.php), is free,
configurable, and supported by the DTC and has been adapted to work
with an ever-increasing variety of forecast and observation types. New
verification methods and dataset options are added each year, with
input from the community and from developments in the published
literature.
MET allows users to verify forecasts via traditional, neighborhood, and
object-based methods. To account for the uncertainty associated with
these measures, methods for estimating confidence intervals for the
verification statistics are an integral part of MET. The latest release
is set for August 2014 and will include many enhancements for users.
MET will accept data in NetCDF-CF compliant format and will include an
expanded set of continuous and categorical statistics. The tropical
cyclone verification capabilities have also been enhanced.
Autoconf support has been added to simplify compilation, and output file
sizes have been reduced drastically to assist operational users. DTC
has also developed a database and display system for internal use
called METViewer. The prototype was recently reworked for
enhanced utility, and METViewer version 1.0 will be ready for friendly
beta-testers as well.
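To illustrate the idea behind attaching confidence intervals to verification statistics, the sketch below computes a percentile-bootstrap interval for the Critical Success Index (CSI), a standard categorical score. This is a minimal, hypothetical illustration of the general technique, not MET's actual implementation; the function names and the example counts are invented for the example.

```python
import random

def csi(hits, misses, false_alarms):
    """Critical Success Index = hits / (hits + misses + false_alarms)."""
    denom = hits + misses + false_alarms
    return hits / denom if denom else 0.0

def bootstrap_csi_ci(outcomes, n_boot=1000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for CSI.

    `outcomes` is a per-event list of labels: 'hit', 'miss', or 'fa'.
    Illustrative sketch only -- not MET's algorithm.
    """
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        # Resample the matched forecast/observation pairs with replacement
        sample = [rng.choice(outcomes) for _ in outcomes]
        stats.append(csi(sample.count('hit'),
                         sample.count('miss'),
                         sample.count('fa')))
    stats.sort()
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical sample: 60 hits, 25 misses, 15 false alarms
outcomes = ['hit'] * 60 + ['miss'] * 25 + ['fa'] * 15
low, high = bootstrap_csi_ci(outcomes)
print(f"CSI = {csi(60, 25, 15):.2f}, 95% CI = ({low:.2f}, {high:.2f})")
```

The width of such an interval shrinks with sample size, which is one reason large, representative samples are emphasized above.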
This presentation will provide an overview of verification methods
included in MET and METViewer to highlight their capabilities for both
operational and diagnostic use. Examples of evaluations from
DTC projects will be presented to demonstrate the existing and
upcoming verification capabilities.