
HWT 2009 Spring Experiment Objective Verification


The Experimental Forecast Program (EFP) component of the NOAA Hazardous Weather Testbed (HWT) has conducted Spring Experiments since 2000. The main focus of recent Spring Experiments has been to better understand how to use the output of near-cloud-resolving configurations of numerical models to predict convective storms. The primary organizers of the HWT-EFP are the National Severe Storms Laboratory (NSSL) and the Storm Prediction Center (SPC). The experiences of the HWT-EFP participants have shown that high-resolution convective storm predictions are at times difficult for operational forecasters to reconcile, in part because many solutions appear to be plausible for a given mesoscale environment. The 2009 HWT Spring Experiment page may be found at: http://hwt.nssl.noaa.gov/Spring_2009

The Model Evaluation Tools (MET), developed by the Developmental Testbed Center (DTC), were used in 2008 and will be used in 2009 to help evaluate WRF model performance for the Spring Experiment. Three important goals of these evaluations have been (i) to provide objective evaluations of the experimental forecasts, (ii) to supplement and compare with subjective assessments of performance, and (iii) to expose forecasters and researchers to both new and traditional approaches for evaluating precipitation forecasts.

MET provides a variety of statistical tools for evaluating model-based forecasts using both gridded and point observations. WRF model forecasts of 1-h accumulated precipitation were evaluated using the Grid_stat and MODE tools within MET. Grid_stat applies traditional verification methods to gridded datasets, producing metrics such as the Equitable Threat Score (ETS), Bias, and a host of other statistics. MODE, the Method for Object-based Diagnostic Evaluation, provides an object-based verification of gridded forecasts by identifying and matching "objects" (i.e., areas of interest) in the forecast and observed fields and comparing the attributes of the forecast/observation object pairs.
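To make the traditional metrics concrete, the sketch below shows how scores like ETS and frequency bias are computed from a 2x2 contingency table (hits, misses, false alarms, correct negatives) obtained by thresholding forecast and observed precipitation grids. This is an illustration of the standard formulas, not MET source code, and the counts used are made-up placeholders.

```python
# Traditional categorical verification scores of the kind Grid_stat reports,
# computed from a 2x2 contingency table. Counts are illustrative only.

def frequency_bias(hits, misses, false_alarms):
    """Forecast event frequency / observed event frequency; 1.0 is unbiased."""
    return (hits + false_alarms) / (hits + misses)

def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    """ETS (Gilbert Skill Score): threat score adjusted for chance hits."""
    total = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / total
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

# Hypothetical counts from thresholding a 1-h precipitation field:
h, m, f, c = 50, 30, 20, 900
print(round(frequency_bias(h, m, f), 3))            # 0.875 (slight underforecast)
print(round(equitable_threat_score(h, m, f, c), 3))  # 0.47
```

An ETS of 1 indicates a perfect forecast and values at or below 0 indicate no skill beyond random chance, which is why it is a common headline score for precipitation verification.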

In 2008, the DTC evaluated thirty-three cases from NCEP’s Environmental Modeling Center (EMC) and NSSL 4-km WRF runs (NMM and ARW dynamic cores, respectively). This year, the DTC will evaluate forecasts from two members of the CAPS Storm Scale Ensemble Forecast (SSEF) and the High Resolution Rapid Refresh grids being provided for the VORTEX2 field project, which runs at the same time.