GF Testing

Global Model Configuration

  • Code
  • Model
  • Initialization
  • Cases
  • Verification
  • Diagnostics

Codes Employed

The components of the end-to-end forecast system used in the convective parameterization testing included:

    • NEMS-GSM model (r85909)

    • NCEP's Unified Post Processor (v7.5.1)

    • Model Evaluation Tools (MET v5.2)

    • Python for graphics generation

Scripts and Automation

This test employed two complementary workflows:

    • A modified version of EMC's workflow (v14.1.0) for running the NEMS-based GFS, including setting up environment variables, running the forecast model, post-processing, tracking tropical cyclones, and detecting tropical cyclogenesis.

    • GMTB-established scripts, automated using the Rocoto Workflow Management System, to stage datasets, create forecast graphics, run forecast verification, archive results, and purge the disk.

Control Configuration (SAS)

Microphysics: Zhao-Carr
Radiation (LW/SW): RRTMG
Gravity Wave Drag: Orographic and Convective Gravity Wave Drag
PBL: Hybrid Eddy-diffusivity Mass-flux Scheme
Deep Convection: Simplified Arakawa-Schubert
Shallow Convection: Mass-Flux based SAS

Test Configuration (GF)

Microphysics: Zhao-Carr
Radiation (LW/SW): RRTMG
Gravity Wave Drag: Orographic and Convective Gravity Wave Drag
PBL: Hybrid Eddy-diffusivity Mass-flux Scheme
Deep Convection: Grell-Freitas
Shallow Convection: Grell-Freitas

Other Settings

• Additional configuration parameters used for GF:
    imid = 0 (mid-level clouds turned off)
    ichoice = 0 (deep convection closure option)
    ichoice_s = 2 (shallow convection closure option)
    dicycle = 1 (diurnal cycle adjustment turned on)

Initial Conditions

Initial conditions (ICs): Operational GFS analyses (T1534)

Pre-processing Component

The operational GFS analyses were run through the global_chgres pre-processing code to convert the input files from T1534 to T574.
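
Conceptually, this resolution change retains only the spherical-harmonic coefficients up to the lower triangular truncation. The NumPy sketch below illustrates that idea only; it is not the global_chgres code itself, and the coefficient layout and function name are assumptions made for illustration.

    import numpy as np

    def truncate_spectral(coeffs, n_out):
        """Illustrative triangular truncation of spherical-harmonic coefficients.

        coeffs is assumed to be a complex array indexed as [m, n] (zonal
        wavenumber m, total wavenumber n); entries with n < m are unused.
        Coefficients with total wavenumber above n_out are discarded.
        """
        return coeffs[: n_out + 1, : n_out + 1].copy()

    # Hypothetical use: reduce a T1534 coefficient array to T574.
    full = np.zeros((1535, 1535), dtype=complex)   # placeholder input spectrum
    print(truncate_spectral(full, 574).shape)      # (575, 575)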

Cases Run

Forecast Date Range: June - August 2016

Initializations: Daily at 00 UTC

Forecast Length: 240 hours; output files generated every 6 hours
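
For reference, the cycle set implied by this configuration can be enumerated with the short sketch below (assuming the daily cycles span 1 June through 31 August 2016, inclusive).

    from datetime import datetime, timedelta

    # Daily 00 UTC initializations for June-August 2016 (assumed inclusive
    # end points), each integrated to 240 h with output every 6 h.
    start, end = datetime(2016, 6, 1), datetime(2016, 8, 31)
    cycles = [start + timedelta(days=d) for d in range((end - start).days + 1)]
    lead_hours = list(range(0, 241, 6))
    print(len(cycles), len(lead_hours))   # 92 cycles, 41 output times per cycle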

Verification

The Model Evaluation Tools (MET) package comprises:

    • Grid-to-point comparisons - Surface and upper-air model data
    • Grid-to-grid comparisons - QPF and anomaly correlation

MET was used to generate objective verification statistics, including:

    • Root Mean Square Error (RMSE) and mean error (bias) for:
        • Surface (CONUS only): temperature (2 m), relative humidity (2 m), and winds (10 m)
        • Upper-air: temperature, relative humidity, and winds

    • Equitable Threat Score (ETS) and frequency bias for:
        • 6-hr (CONUS only) and 24-hr precipitation accumulations

    • Anomaly Correlation (AC) for:
        • 500 hPa geopotential height (Northern and Southern Hemispheres)
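
For orientation, the sketch below shows how these statistics might be computed on matched forecast-observation pairs outside of MET; MET itself was the tool used in the test, and the function names and simplified inputs here are illustrative assumptions.

    import numpy as np

    def rmse_and_bias(forecast, observed):
        """Root mean square error and mean error (bias) of matched pairs."""
        diff = np.asarray(forecast) - np.asarray(observed)
        return np.sqrt(np.mean(diff ** 2)), np.mean(diff)

    def ets_and_fbias(forecast, observed, threshold):
        """Equitable Threat Score and frequency bias for one accumulation threshold."""
        f = np.asarray(forecast) >= threshold
        o = np.asarray(observed) >= threshold
        hits = np.sum(f & o)
        false_alarms = np.sum(f & ~o)
        misses = np.sum(~f & o)
        # Hits expected by chance make the threat score "equitable"
        hits_random = (hits + false_alarms) * (hits + misses) / f.size
        ets = (hits - hits_random) / (hits + false_alarms + misses - hits_random)
        fbias = (hits + false_alarms) / (hits + misses)
        return ets, fbias

    def anomaly_correlation(forecast, observed, climatology):
        """Centered anomaly correlation, e.g. for 500 hPa geopotential height."""
        fa = np.asarray(forecast, dtype=float) - np.asarray(climatology, dtype=float)
        oa = np.asarray(observed, dtype=float) - np.asarray(climatology, dtype=float)
        fa -= fa.mean()
        oa -= oa.mean()
        return np.sum(fa * oa) / np.sqrt(np.sum(fa ** 2) * np.sum(oa ** 2))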

The MET package for tropical cyclone verification (MET-TC) was used to generate statistics, including:

    • Absolute track error, along-track error, cross-track error, absolute intensity error, and intensity error for:
        • Atlantic, Eastern North Pacific, and Western Pacific Basins
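
The track-error terms decompose the position error into components along and across the observed storm motion. The sketch below illustrates those definitions with a simple local flat-earth approximation; MET-TC performs these calculations internally, and the function signature and sign conventions here are assumptions for illustration.

    import numpy as np

    EARTH_RADIUS_KM = 6371.0

    def track_errors(fcst_lat, fcst_lon, obs_lat, obs_lon, obs_heading_deg):
        """Approximate absolute, along-track, and cross-track errors in km.

        Uses a local tangent plane around the observed position;
        obs_heading_deg is the observed storm motion direction
        (degrees clockwise from north).
        """
        # Position error vector in km (east and north components)
        dx = np.radians(fcst_lon - obs_lon) * EARTH_RADIUS_KM * np.cos(np.radians(obs_lat))
        dy = np.radians(fcst_lat - obs_lat) * EARTH_RADIUS_KM
        # Unit vector pointing along the observed storm motion
        heading = np.radians(obs_heading_deg)
        ux, uy = np.sin(heading), np.cos(heading)
        along = dx * ux + dy * uy   # positive: forecast ahead of the observed storm
        cross = dx * uy - dy * ux   # positive: forecast to the right of the track
        absolute = np.hypot(dx, dy)
        return absolute, along, cross

    # Intensity error is simply forecast minus observed maximum wind;
    # the absolute intensity error is its magnitude.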

Each type of verification metric is accompanied by confidence intervals (CIs), at the 95% level, computed using a parametric method for the surface and upper-air variables and a bootstrapping method for precipitation.

Both configurations were run for the same cases, allowing a pairwise difference methodology to be applied, as appropriate. The CIs on the pairwise differences between statistics for the two configurations objectively determine whether the differences are statistically significant (SS).
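
As a rough illustration of this approach, the sketch below bootstraps a 95% CI on the mean of matched GF-minus-SAS differences; the resampling details (replicate count, percentile method) are assumptions and not necessarily what MET uses.

    import numpy as np

    def bootstrap_diff_ci(gf_scores, sas_scores, n_boot=10000, alpha=0.05, seed=0):
        """Percentile-bootstrap CI for the mean of paired score differences.

        If the interval excludes zero, the difference between the two
        configurations is treated as statistically significant.
        """
        rng = np.random.default_rng(seed)
        diffs = np.asarray(gf_scores, dtype=float) - np.asarray(sas_scores, dtype=float)
        means = np.array([rng.choice(diffs, size=diffs.size, replace=True).mean()
                          for _ in range(n_boot)])
        lower, upper = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        return diffs.mean(), (lower, upper)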

Area-averaged verification results were computed for the CONUS domain, CONUS East and West domains, 14 CONUS sub-regions, and global sub-regions.

Tropical Cyclogenesis

Skillful forecasting of Tropical Cyclogenesis (TG) is a difficult challenge, and assessing it provides insight into the overall performance of a model. In addition to the verification of existing storms, TG counts were obtained from the cyclogenesis files generated by the NCEP tracker software and compared against the development of new storms described in the Best Track files.

An investigation of global cyclogenesis during the test period indicated 54 storms in the Best Track data. The accompanying figure shows the first reported TG counts from the daily retrospective forecasts. GF tended to generate more TGs than SAS over the three-month period.
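
A simplified view of how such counts can be tallied is sketched below; it assumes the genesis events from the tracker output and the Best Track data have already been parsed into (storm identifier, genesis time) pairs, which is not the raw format of either source.

    def first_genesis_events(genesis_events):
        """Keep one genesis per storm identifier: the earliest reported time.

        genesis_events: iterable of (storm_id, genesis_time) pairs, assumed
        to be pre-parsed from the tracker or Best Track files.
        """
        first_seen = {}
        for storm_id, when in genesis_events:
            if storm_id not in first_seen or when < first_seen[storm_id]:
                first_seen[storm_id] = when
        return first_seen

    # TG counts are then simply len(first_genesis_events(tracker_events))
    # for each configuration, compared against
    # len(first_genesis_events(best_track_events)).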