GFS/NAM Forecast Precipitation Comparison: Description

Codes Employed

The software packages used in the GFS/NAM Forecast Precipitation Comparison Test included:

    • WRF Post Processor (WPP), version 3.2

    • Model Evaluation Tools (MET), version 3.0

    • NCAR Command Language (NCL) for graphics generation

    • The R statistical programming language for computing confidence intervals

All model output came from operational runs and was retrieved directly from the U.S. National Centers for Environmental Prediction (NCEP) for retrospective analysis.

Global Forecast System (GFS) model

    • Global Gaussian grid with 0.5 x 0.5 degree resolution

North American Mesoscale (NAM) model

    • E-grid domain with approximately 12-km grid spacing

Domain Configuration

    • The copygb program was used to regrid the native model output to a 15-km and a 60-km contiguous U.S. (CONUS) grid covering the exact same domain (white outline) at differing resolutions; an illustrative copygb invocation is sketched at the end of this section.

    • Grid dimensions
      • 15-km: 400 x 280 gridpoints; Lambert-Conformal map projection
      • 60-km: 100 x 70 gridpoints; Lambert-Conformal map projection

    • The copygb program was also used to regrid the native model output to a third domain (blue shaded area) corresponding to the 4-km grid used for the Stage II observations.

    • Grid dimensions
      • 4-km: 1121 x 881 gridpoints; Polar Stereographic map projection (Stage II observation domain)
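
As a rough illustration of the regridding step described above, the sketch below wraps a copygb call in Python. The file names and the NCEP grid number 212 are hypothetical placeholders; the full grid-navigation strings for the test's 15-km, 60-km, and 4-km grids are not reproduced in this description.

    # Hypothetical sketch of the copygb regridding step; the file names and
    # the example grid number are placeholders, not the test's actual settings.
    import subprocess

    def regrid(infile, outfile, grid="212"):
        """Interpolate a GRIB1 file to a predefined NCEP grid with copygb."""
        subprocess.run(["copygb", f"-xg{grid}", infile, outfile], check=True)

    regrid("gfs_f036.grb", "gfs_f036_regrid.grb")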

Cases Run

Forecast Dates: 18 December 2008 - 15 December 2009

Initializations: Daily at 00 UTC

Forecast Length: 84 hours; output files available every 3 hours

Verification

Grid-to-grid comparisons were performed to compute objective verification statistics for quantitative precipitation forecasts (QPF) using the Model Evaluation Tools (MET) package. Lead times of 12, 24, 36, 48, 60, 72, and 84 h were examined for 3-h accumulations, and lead times of 36, 60, and 84 h were examined for 24-h accumulations.

Traditional metrics computed included:

    • Gilbert Skill Score (GSS) and Frequency Bias (FBias)
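
Both scores derive from the 2x2 contingency table of threshold exceedances. The sketch below shows the standard formulas; it is a minimal illustration, not MET's implementation:

    import numpy as np

    def contingency_counts(fcst, obs, threshold):
        """2x2 contingency-table counts for exceedances of `threshold`."""
        f = np.asarray(fcst) >= threshold
        o = np.asarray(obs) >= threshold
        hits = np.sum(f & o)
        false_alarms = np.sum(f & ~o)
        misses = np.sum(~f & o)
        correct_negatives = np.sum(~f & ~o)
        return hits, false_alarms, misses, correct_negatives

    def frequency_bias(hits, false_alarms, misses):
        """FBias = (hits + false alarms) / (hits + misses); 1 is unbiased."""
        return (hits + false_alarms) / (hits + misses)

    def gilbert_skill_score(hits, false_alarms, misses, correct_negatives):
        """GSS: threat score adjusted for hits expected by chance."""
        total = hits + false_alarms + misses + correct_negatives
        hits_random = (hits + misses) * (hits + false_alarms) / total
        return (hits - hits_random) / (hits + false_alarms + misses - hits_random)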

The traditional verification metrics were accompanied by confidence intervals (CIs), at the 99% level, computed using a bootstrapping technique. When comparing the models, a conservative estimate of statistically significant (SS) differences was employed, based solely on whether the CIs accompanying the aggregate statistics overlapped. If no overlap was noted for a particular threshold, the difference between the models was considered SS.
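
The resampling idea behind the CIs can be sketched as a minimal percentile bootstrap (assuming the aggregate statistic is a simple mean; the actual computation in R may use a different bootstrap variant):

    import numpy as np

    def bootstrap_ci(values, stat=np.mean, n_boot=1000, alpha=0.01, seed=0):
        """Percentile-bootstrap CI; alpha=0.01 gives a 99% interval."""
        rng = np.random.default_rng(seed)
        values = np.asarray(values)
        reps = np.array([
            stat(rng.choice(values, size=values.size, replace=True))
            for _ in range(n_boot)
        ])
        return np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)])

    def overlaps(ci_a, ci_b):
        """Conservative SS test: non-overlapping CIs => significant."""
        return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]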

More advanced spatial verification techniques were also applied to relate precipitation differences to the models' horizontal scales, including:

    • Method for Object-based Diagnostic Evaluation (MODE) and Fractions Skill Score (FSS)

Many object attributes were examined in the MODE output, including (but not limited to) centroid distance, boundary distance, angle difference, area ratio, and intersection area ratio. For FSS, forecast skill was examined as a function of spatial scale.
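
The scale dependence of FSS can be illustrated with a short sketch following the Roberts and Lean (2008) formulation (MET's Grid-Stat provides its own neighborhood implementation, so this is illustrative only):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def fss(fcst, obs, threshold, neighborhood):
        """FSS for one threshold and one square neighborhood width (grid points)."""
        # Binary exceedance grids.
        f = (np.asarray(fcst) >= threshold).astype(float)
        o = (np.asarray(obs) >= threshold).astype(float)
        # Neighborhood event fractions via a moving-window mean.
        pf = uniform_filter(f, size=neighborhood, mode="constant")
        po = uniform_filter(o, size=neighborhood, mode="constant")
        # 1 minus the MSE of the fractions over the no-overlap reference MSE.
        mse = np.mean((pf - po) ** 2)
        mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
        return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

Because FSS generally increases with neighborhood size, evaluating it across several widths shows the smallest scale at which a forecast attains useful skill.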

Area-averaged verification results were computed for the full verification domain.