Hazardous Weather Testbed Collaborations

Overview

During NOAA Testbed experiments that occur throughout the year [e.g., the Hazardous Weather Testbed (HWT) and the Weather Prediction Center (WPC) Hydrometeorological Testbed (HMT)], a large volume of experimental model data is produced to support events that typically run for several weeks. While these data are subjectively assessed daily during the experiments, extensive objective verification is often lacking afterward to thoroughly investigate the strengths and weaknesses of the contributed model configurations. The large datasets produced during these testbed experiments provide an excellent opportunity to identify, and begin to answer, the most pressing scientific questions that need to be addressed.

During the 2016 HWT Spring Forecasting Experiment (SFE), the contributed model output from participating groups was coordinated around a unified setup (e.g., WRF version, domain size, and vertical levels and spacing) to create a super-ensemble of more than 60 members called the Community Leveraged Unified Ensemble (CLUE). This careful coordination and construction of CLUE allowed meaningful comparisons to be made among a variety of members. To guide the design of a future operational convection-allowing ensemble, it is critical to investigate the key scientific questions that inform configuration strategies through an evidence-driven approach.

Many questions remain regarding the best approach to constructing a convection-allowing model (CAM) ensemble system. For example, should model uncertainty be addressed through multiple dynamic cores, multiple physics parameterizations, stochastic physics, or some combination of these? CLUE provides the datasets necessary to begin exploring this question; the methods targeted for this work include comparing single-physics/single-core ensembles against multi-physics and/or multi-core approaches. Ultimately, the probabilistic forecast performance of each targeted ensemble subset is examined, and deterministic forecasts from select members are also assessed to understand their contribution to the overall ensemble spread.
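As a minimal sketch of the kind of spread/skill comparison involved, the Python snippet below contrasts two hypothetical ensemble subsets by computing each subset's spread (standard deviation across members, averaged over the grid) and the RMSE of its ensemble mean against a verifying analysis. The function name and the synthetic data are illustrative assumptions, not part of the CLUE workflow.

```python
import numpy as np

def spread_and_skill(members, analysis):
    """Compute ensemble spread and skill over a 2-D verification grid.

    members  : array of shape (n_members, ny, nx) -- one ensemble subset
    analysis : array of shape (ny, nx)            -- verifying analysis
    """
    ens_mean = members.mean(axis=0)
    # Spread: standard deviation across members, averaged over the grid
    spread = members.std(axis=0, ddof=1).mean()
    # Skill: root-mean-square error of the ensemble mean
    rmse = np.sqrt(((ens_mean - analysis) ** 2).mean())
    return spread, rmse

# Hypothetical comparison of two subsets (synthetic data for illustration)
rng = np.random.default_rng(0)
analysis = rng.normal(size=(50, 50))
single_physics = analysis + rng.normal(scale=1.0, size=(10, 50, 50))
multi_physics = analysis + rng.normal(scale=1.3, size=(10, 50, 50))

for name, subset in [("single-physics", single_physics),
                     ("multi-physics", multi_physics)]:
    spread, rmse = spread_and_skill(subset, analysis)
    print(f"{name}: spread={spread:.3f}, ensemble-mean RMSE={rmse:.3f}")
```

For a well-calibrated ensemble, the spread should roughly match the RMSE of the ensemble mean; a subset whose spread falls well below its error is under-dispersive.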

The extensive evaluation of the probabilistic performance of ensembles constructed from select CLUE members, along with deterministic forecasts from individual CLUE members, was conducted using the Model Evaluation Tools (MET) software system. The metrics used for the probabilistic and deterministic evaluations included both traditional metrics widely used in the community (e.g., spread, skill, error, and reliability) and newer methods that provide additional diagnostic information, especially at higher resolution. These newer approaches included the Method for Object-based Diagnostic Evaluation (MODE) and neighborhood methods applied to deterministic and probabilistic output [e.g., the Fractions Skill Score (FSS)]. In addition to standard meteorological fields that highlight overall model performance, a small number of severe weather storm-attribute fields readily available in model output and analysis fields was evaluated.
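MET computes neighborhood statistics directly (its Grid-Stat tool includes the FSS among its neighborhood methods), but the Python sketch below illustrates the idea behind the score: binary event grids are smoothed into neighborhood event fractions before being compared, so a forecast that places an event slightly off the observed location is not penalized as a complete miss. The function and the threshold/neighborhood values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fractions_skill_score(fcst, obs, threshold, neighborhood):
    """Fractions Skill Score for one forecast/observation field pair.

    fcst, obs    : 2-D arrays on a common grid (e.g., hourly precipitation)
    threshold    : event threshold in the fields' units
    neighborhood : square neighborhood width in grid points
    """
    # Binary event grids -> neighborhood event fractions (box average);
    # uniform_filter's default edge handling is an approximation near borders
    f_frac = uniform_filter((fcst >= threshold).astype(float), size=neighborhood)
    o_frac = uniform_filter((obs >= threshold).astype(float), size=neighborhood)
    mse = np.mean((f_frac - o_frac) ** 2)
    mse_ref = np.mean(f_frac ** 2) + np.mean(o_frac ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

# Illustrative call on synthetic fields; the score rises toward 1 as the
# neighborhood grows and forecast event coverage matches observed coverage
rng = np.random.default_rng(1)
fcst = rng.gamma(2.0, 1.0, size=(100, 100))
obs = rng.gamma(2.0, 1.0, size=(100, 100))
print(fractions_skill_score(fcst, obs, threshold=5.0, neighborhood=9))
```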