A Comparison of the Model Evaluation Tools (MET) with NCEP EMC and NHC Verification Systems
Tara Jensen, DTC
2:30 pm November 6 in Room 2155
Abstract:
The Developmental Testbed Center (DTC) serves as a bridge between
research and operations for numerical weather forecasts. Accordingly, DTC
staff have developed and incorporated a number of strategies for
performing meaningful evaluations of these forecasts, ensuring that the
weather community can trust that real and significant improvements are
realized prior to operational implementation. Use of appropriate metrics;
accurate estimates of uncertainty; consistent, independent
observations; and large, representative samples are essential elements
of a meaningful evaluation. Spatial, temporal, and conditional analyses
should be incorporated wherever appropriate.
Numerous new methods have been proposed to improve the quantification and
diagnosis of forecast performance using spatial, probabilistic,
uncertainty, and ensemble information. The complexity of these methods
makes verification software development difficult and impractical for
many users. Further, verification results using the same methods may
not be comparable when different software is used. To address these
issues, community verification software has been developed through a
joint partnership between the DTC and the Research Applications
Laboratory (RAL) at the National Center for Atmospheric Research
(NCAR) to provide a consistent and complete set of
verification tools. This software package, the Model Evaluation Tools
(MET; http://www.dtcenter.org/met/users/index.php), is free,
configurable, and supported by the DTC and has been adapted to work
with an ever-increasing variety of forecast and observation types,
including a tool for evaluating tropical cyclone track and intensity
(MET-TC). New verification methods and dataset options are added each
year, with input from the community and from developments in the
published literature.
Several years ago, METv3.0.1 was directly compared with the NCEP
Verification System (NVS) and Quantitative Precipitation Forecast (QPF)
Verification System (QVS). Both systems provide the means to generate
partial sums or matched pairs from which standard statistics can be
derived. This evaluation was performed on a subset of the 2007 DTC Core
Test and successfully demonstrated the ability of MET to reproduce
verification metrics comparable to those generated by NVS and QVS.
The results of the study were provided to NCEP EMC as well as published
to the MET website in 2011
(http://www.dtcenter.org/met/users/docs/write_ups/dtc_ncep_201109.pdf).
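To illustrate the idea behind this comparison, the sketch below (not MET's actual API; all function names here are hypothetical) shows how standard continuous statistics such as mean error and RMSE can be derived either from matched forecast/observation pairs or, equivalently, from the accumulated partial sums:

```python
import math

def partial_sums(pairs):
    """Accumulate partial sums (n, sum_f, sum_o, sum_ff, sum_oo, sum_fo)
    from matched (forecast, observation) pairs."""
    n = len(pairs)
    sf = sum(f for f, o in pairs)
    so = sum(o for f, o in pairs)
    sff = sum(f * f for f, o in pairs)
    soo = sum(o * o for f, o in pairs)
    sfo = sum(f * o for f, o in pairs)
    return n, sf, so, sff, soo, sfo

def stats_from_sums(n, sf, so, sff, soo, sfo):
    """Mean error (bias) and RMSE computed from partial sums alone,
    without revisiting the individual pairs."""
    me = (sf - so) / n
    mse = (sff - 2.0 * sfo + soo) / n  # mean of (f - o)^2 expanded
    return me, math.sqrt(mse)

# Toy matched pairs (forecast, observation)
pairs = [(1.2, 1.0), (0.8, 1.1), (2.0, 1.7)]
me, rmse = stats_from_sums(*partial_sums(pairs))
```

Because the same statistics are fully determined by the partial sums, two packages that accumulate the sums consistently should produce matching metrics, which is the property the MET/NVS/QVS comparison tested.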
A similar comparison was performed for the MET-TC (v4.0.1) tool against
the National Hurricane Center (NHC) verification package, using the
dataset compiled for the Hurricane Forecast Improvement Program
(HFIP) 2012 Retrospective Tests. This evaluation showed that the output
from MET-TC was consistent with that from the NHC package. This
presentation will provide an overview of these two evaluations.
Similarities and differences between the packages will be discussed.
This seminar will be followed by a brief introduction to setting up
MET and its configuration files, along with a question-and-answer
session.