Extended Core Test
Codes Employed
The components of the end-to-end forecast system used in the Extended Core Test included:
• WRF Preprocessing System (WPS) (v2.2)
• WRF model (Note: The WRF code used in this test does not
correspond to a public release. Instead, a snapshot of the top of
the WRF code repository as of August 29, 2007 was used. This
choice was driven by the need for the latest code developments,
especially the unified Noah LSM, which was not available in the
then-current public release, WRF v2.2.)
• WRF Post Processor (WPP) (v2.2)
• NCEP Verification System
• NCAR Command Language (NCL) for graphics generation
• Statistical programming language R, used to perform aggregations
and compute confidence intervals
Domain Configuration
• CONUS domain with roughly 13-km grid spacing (selected such
that it fits within the RUC13 domain)
Figure: Boundaries of the computational domain used for the ARW (dashed line), the NMM (dotted line), and the post-processed domain (solid line).
• Grid dimensions:
• NMM: 280 x 435 gridpoints, for a total of 121,800 horizontal
gridpoints
• ARW: 400 x 304 gridpoints, for a total of 121,600 horizontal
gridpoints
• Post-processed: 451 x 337 gridpoints, for a total of 151,987
horizontal gridpoints
• 58 vertical levels (Note: An exact match in vertical levels is not
possible because the ARW uses a sigma-pressure vertical
coordinate, while the NMM uses a hybrid system, with sigma-
pressure levels below 300 hPa and isobaric levels aloft)
• Projections:
• NMM: Rotated Latitude-Longitude projection
• ARW: Lambert-Conformal map projection
Sample Namelists
ARW configuration:
namelist.wps
namelist.input
NMM configuration:
namelist.wps
namelist.input
Physics Suite
Microphysics: Ferrier
Radiation (LW/SW): GFDL
Surface Layer: Janjic
Land Surface: Noah
PBL: Mellor-Yamada-Janjic
Convection: Betts-Miller-Janjic
Other run-time settings
• Timestep:
• ARW: Long timestep = 72 s; Acoustic timestep = 18 s
• NMM: Long timestep = 30 s
• Calls to the boundary layer, microphysics and cumulus
parameterization were made every time step for the ARW
(every 72 s) and every other time step for the NMM (every 60 s)
• Calls to radiation were made every 30 minutes
• Sample namelist.input for ARW and NMM
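For reference, the physics suite and timestep settings above map onto namelist.input roughly as follows. This is an illustrative fragment only, with option values taken from the WRF v2.2 documentation for the ARW; the exact settings used in the test are those in the sample namelists linked above.

    &domains
     time_step          = 72,   ! ARW long timestep (s)
    /
    &physics
     mp_physics         = 5,    ! Ferrier (Eta) microphysics
     ra_lw_physics      = 99,   ! GFDL longwave radiation
     ra_sw_physics      = 99,   ! GFDL shortwave radiation
     radt               = 30,   ! radiation called every 30 minutes
     sf_sfclay_physics  = 2,    ! Janjic (Eta) surface layer
     sf_surface_physics = 2,    ! Noah land surface model
     bl_pbl_physics     = 2,    ! Mellor-Yamada-Janjic PBL
     cu_physics         = 2,    ! Betts-Miller-Janjic convection
    /
    &dynamics
     time_step_sound    = 4,    ! 72 s / 4 = 18 s acoustic timestep
    /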
The end-to-end system for this test did not include a data assimilation component.
Initial and Boundary Conditions
• Initial conditions (ICs) and Lateral Boundary Conditions
(LBCs): North American Mesoscale Model (NAM212)
(Note: For the retrospective period used, the forecast
component of the NAM was the Eta model.)
• Sea Surface Temperature (SST) Initialization: NCEP's daily,
real-time SST product
Cases Run
The ARW and NMM dynamic cores were each used to forecast 120 cases divided into the four seasons. The runs were initialized every 36 h, thereby alternating between 00 and 12 UTC cycles, and were run out to 60 hours (the cycle alternation is illustrated in the short sketch following the table below).
Summer: 09 July - 24 August 2005
Fall: 10 October - 23 November 2005
Winter: 10 January - 22 February 2006
Spring: 10 April - 23 May 2006
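To make the 36-h cycling concrete, the following R snippet generates the first few initialization times, assuming for illustration a first cycle at 12 UTC on 9 July 2005 (the earliest cycle appearing in the table below); note how 00 and 12 UTC cycles alternate:

    # Initialization times every 36 h alternate between 00 and 12 UTC cycles.
    cycles <- seq(from = as.POSIXct("2005-07-09 12:00", tz = "UTC"),
                  by = "36 hours", length.out = 6)
    format(cycles, "%Y%m%d%H")
    # "2005070912" "2005071100" "2005071212" "2005071400" "2005071512" "2005071700"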
The table below lists the forecast cases for which verification was incomplete, along with the affected verification and the reason in each instance.
Missing Verification:
Season | Forecast cycle | Affected verification | Reason
Winter | 2006011300 | Incomplete 24-hr and 3-hr QPF verification | Missing RFC and ST2 analyses
Winter | 2006011412 | Incomplete 24-hr and 3-hr QPF verification | Missing RFC and ST2 analyses
Winter | 2006011712 | Incomplete sfc/ua verification | Missing RUC Prepbufr
Winter | 2006011900 | Incomplete sfc/ua verification | Missing RUC Prepbufr
Winter | 2006021312 | Incomplete 3-hr QPF verification | Corrupt ST2 analysis
Winter | 2006021500 | Incomplete 3-hr QPF verification | Corrupt ST2 analysis
Winter | 2006021612 | Incomplete sfc/ua verification | Missing RUC Prepbufr
Winter | 2006021800 | Incomplete sfc/ua verification | Missing RUC Prepbufr
Fall | 2005101212 | Incomplete 3-hr QPF verification | Corrupt ST2 analysis
Fall | 2005103012 | Incomplete sfc/ua verification | Missing RUC Prepbufr
Fall | 2005110100 | Incomplete sfc/ua verification | Missing RUC Prepbufr
Fall | 2005110812 | Incomplete sfc/ua verification | Missing RUC Prepbufr
Fall | 2005111000 | Incomplete sfc/ua verification | Missing RUC Prepbufr
Summer | 2005070912 (NMM only) | Incomplete 3-hr QPF verification (at 60 hr) | Unknown
Summer | 2005072812 (NMM only) | Incomplete 3-hr QPF verification (at 60 hr) | Unknown
Spring | 2006042200 | Incomplete 24-hr QPF and sfc/ua verification | Missing RFC analysis and RUC Prepbufr
Spring | 2006042312 | Incomplete 24-hr QPF and sfc/ua verification | Missing RFC analysis and RUC Prepbufr
Spring | 2006050100 | Incomplete 3-hr QPF verification | Missing ST2 analysis
Spring | 2006051300 | Incomplete sfc/ua verification | Missing RUC Prepbufr
Spring | 2006051412 | Incomplete sfc/ua verification | Missing RUC Prepbufr
Spring | 2006052312 | Incomplete sfc/ua verification | Missing RUC Prepbufr
Verification
The NCEP Verification System comprises:
• Surface and Upper Air Verification System (grid-to-point
comparison)
• Quantitative Precipitation Forecast (QPF) Verification
System (grid-to-grid comparison)
From these systems, model verification partial sums, aggregated by geographical region using the mean, were generated; objective model verification statistics were then computed using the statistical programming language R. Confidence intervals (CIs) at the 99% level were applied to each of the variables using the appropriate statistical method.
Objective verification statistics generated included (see the sketch following this list):
• Bias-corrected Root Mean Square Error (BCRMSE) and Mean Error (Bias) for:
• Surface Temperature (2 m), Relative Humidity (2 m) and Winds (10 m)
• Upper Air Temperature, Relative Humidity and Winds
• Equitable Threat Score (ETS) and Frequency Bias (FBias) for:
• 3-hr and 24-hr Precipitation Accumulation intervals
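These statistics follow their standard definitions; the minimal R sketch below illustrates them with hypothetical inputs (this is not the NCEP implementation):

    # Continuous statistics from paired forecasts (f) and observations (o):
    #   Bias   = mean(f - o)
    #   BCRMSE = sqrt(mean((f - o)^2) - Bias^2), i.e., RMSE with the mean error removed
    cont_stats <- function(f, o) {
      bias <- mean(f - o)
      c(bias = bias, bcrmse = sqrt(mean((f - o)^2) - bias^2))
    }

    # Categorical statistics from a 2x2 contingency table at a precipitation
    # threshold: h = hits, m = misses, fa = false alarms, cn = correct negatives.
    cat_stats <- function(h, m, fa, cn) {
      n  <- h + m + fa + cn
      he <- (h + m) * (h + fa) / n              # hits expected by chance
      c(ets   = (h - he) / (h + m + fa - he),   # Equitable Threat Score
        fbias = (h + fa) / (h + m))             # Frequency Bias
    }

    cat_stats(h = 30, m = 20, fa = 25, cn = 925)  # ets ~ 0.38, fbias = 1.10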
Verification statistics were only computed for cases that ran to completion for both configurations. This allowed a pair-wise difference technique, which takes advantage of the fact that both configurations faced the same forecast challenge for every case, to be employed in determining statistically significant differences between the two configurations. The CIs on the pair-wise differences between the statistics for the two configurations objectively determine whether the differences are statistically significant (SS).
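A minimal R sketch of this pair-wise approach, using hypothetical per-case scores (the actual system applies the appropriate statistical method for each variable):

    # One score per case per configuration; differencing removes the shared
    # case-to-case variability, since both cores faced the same forecast challenge.
    set.seed(42)
    arw <- rnorm(100, mean = 2.00, sd = 0.3)  # e.g., per-case BCRMSE for the ARW
    nmm <- rnorm(100, mean = 2.05, sd = 0.3)  # e.g., per-case BCRMSE for the NMM
    t.test(arw - nmm, conf.level = 0.99)$conf.int
    # The difference is statistically significant (SS) if this 99% CI excludes zero.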
Verification results were computed for select spatial aggregations, including the entire CONUS (G164), CONUS-West (G165), and CONUS-East (G166) domains.