HWRF HD33/H21A Forecast Comparison

  • Code
  • Models
  • Domain
  • Cases
  • Verification

Codes Employed

The software packages used in the HWRF HD33/H21A Forecast Comparison Test included:

    • WRF - revision 4947
    • WPS - revision 602
    • UPP - beta release v0.5c (revision 75)
    • GSI - official release v2.5
    • Vortex relocation and initialization, prep_hybrid, miscellaneous libraries and tools - hwrf-utilities revision 245
    • Princeton Ocean Model (POM) and POM initialization - revision 85
    • NCEP Coupler - revision 37
    • GFDL Vortex Tracker - revision 53
    • National Hurricane Center Verification System and scripts for aggregation of verification statistics and computation of confidence intervals - revision 28

HWRF Model: HD33 Configuration

    2011 operational HWRF configured from the community code repositories and run by DTC.

HWRF Model: H21A Configuration

    2011 operational HWRF configured from the NCEP/EMC code repositories and run by EMC.

Differences in Configuration:

                                  HD33                             H21A
    Institution                   DTC                              EMC
    Platform                      Linux                            IBM
    Source code                   Community                        EMC
    Scripts                       DTC                              EMC
    Automation                    NOAA GSD Workflow Manager        EMC HWRF History Sequence Manager
    I/O format                    NetCDF                           Binary
    UPP                           UPP Beta v0.5c                   EMC UPP customized for HWRF
    Tracker                       Community repository             EMC operational
    Sharpening in ocean init      Used in spin-up Phases 3 and 4   Used in Phase 3 only (known bug)
    Snow albedo                   Older dataset                    Newer dataset

Domain Configuration

    The HWRF domain was configured the same way as in the NCEP/EMC operational system. The atmospheric model employed a parent grid and a movable nested grid. The parent grid covered a 75 x 75 deg area with approximately 27 km horizontal grid spacing, for a total of 216 x 432 grid points. The nest covered a 5.4 x 5.4 deg area with approximately 9 km grid spacing, for a total of 60 x 100 grid points. The locations of the parent and nest, as well as the pole of the projection, varied from run to run and were dictated by the location of the storm at the time of initialization.

    HWRF was run coupled to the POM ocean model for Atlantic storms and in atmosphere-only mode for East Pacific storms. The POM domain for the Atlantic storms depended on the location of the storm at the initialization time and on the 72-h NHC forecast for the storm location. Those parameters defined whether the East Atlantic or United domain of the POM was used.

    The image shows the atmospheric parent and nest domains (yellow) and the United POM domain (blue).

Cases Run

Storms: 27 complete storms from the 2010 season.

    • 2010 Atlantic: Alex, Two, Bonnie, Colin, Five, Danielle, Earl, Fiona,
      Gaston, Hermine, Igor, Julia, Karl, Lisa, Matthew, Nicole, Otto
    • 2010 East Pacific: Blas, Celia, Darby, Six, Estelle, Eight, Frank, Ten,
      Eleven, Georgette

Initializations: Every 6 h, in cycled mode.

Forecast Length: 126 hours; output files available every 6 hours


Verification

The characteristics of the forecast storm (location, intensity, and structure) were compared against the Best Track using the National Hurricane Center (NHC) Verification System (NHCVx). The HD33 ATCF files were produced by the DTC as part of this test, while the H21A ATCF files were supplied by NOAA/NCEP/EMC. The NHCVx was run separately for each case, at 6-hourly forecast lead times out to 120 h, to generate a distribution of errors. Verification was performed for any geographical location for which Best Track data were available, including over land. No verification was performed when the observed storm was classified as a low or a wave.
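Track error in this kind of verification is the great-circle distance between the forecast position and the Best Track position at a given lead time. The verification itself was done with NHCVx, not with the code below; this is only a minimal Python sketch of the underlying calculation (the function name and the sample coordinates are illustrative):

```python
from math import radians, sin, cos, asin, sqrt

def track_error_nm(fcst_lat, fcst_lon, obs_lat, obs_lon):
    """Great-circle distance between the forecast and Best Track
    positions, in nautical miles (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (fcst_lat, fcst_lon, obs_lat, obs_lon))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    earth_radius_nm = 3440.065  # mean Earth radius in nautical miles
    return 2 * earth_radius_nm * asin(sqrt(a))

# One degree of latitude is roughly 60 nm:
print(round(track_error_nm(25.0, -80.0, 26.0, -80.0)))  # -> 60
```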

A script written in the R statistical language was run on a homogeneous sample of the HD33 and H21A datasets to aggregate the errors and to create summary metrics, including the mean and median of track error, intensity error, absolute intensity error, and the radii of 34-, 50-, and 64-kt winds in all four quadrants. All metrics are accompanied by 95% confidence intervals to describe the uncertainty in the results due to sampling limitations.
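The aggregation above was performed in R; as an illustration of the general approach rather than the actual script, the Python sketch below computes a summary statistic for a sample of errors together with a 95% bootstrap confidence interval (the function name and the error values are hypothetical):

```python
import random
import statistics

def bootstrap_ci(errors, stat=statistics.median, n_boot=2000, alpha=0.05, seed=0):
    """Summary statistic of a sample of forecast errors plus a
    95% bootstrap confidence interval for that statistic."""
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(errors) for _ in errors]) for _ in range(n_boot)
    )
    lo = reps[int((alpha / 2) * n_boot)]            # 2.5th percentile
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]    # 97.5th percentile
    return stat(errors), (lo, hi)

# Hypothetical 48-h track errors (nm) for one configuration
errors = [45, 60, 52, 80, 38, 95, 70, 55, 62, 48]
center, (lo, hi) = bootstrap_ci(errors)
print(center, (lo, hi))
```

Swapping `stat=statistics.mean` gives the mean and its interval in the same way.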

For the purposes of comparing the HD33 and H21A forecasts, pairwise differences (HD33 - H21A) of track error, absolute intensity error, and absolute wind radii error were computed and aggregated with an R script. Ninety-five percent confidence intervals on the median were computed to determine whether there was a statistically significant difference between the two configurations.
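Because the sample is homogeneous, each case contributes one difference: the HD33 error minus the H21A error for the same storm, initialization, and lead time, so a negative median favors HD33. A minimal Python sketch of that pairing (the function name and the error values are hypothetical; the actual computation was done in R):

```python
import statistics

def pairwise_median_diff(hd33_errors, h21a_errors):
    """Median of the pairwise error differences (HD33 - H21A) over a
    homogeneous sample: each position pairs the same case and lead time."""
    diffs = [a - b for a, b in zip(hd33_errors, h21a_errors)]
    return statistics.median(diffs)

# Hypothetical matched 48-h absolute intensity errors (kt)
hd33 = [12, 8, 15, 10, 9, 14]
h21a = [10, 9, 13, 10, 11, 12]
print(pairwise_median_diff(hd33, h21a))  # -> 1.0
```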