Valentina Corradi; Sainan Jin; Norman R. Swanson

Robust Forecast Superiority Testing with an Application to Assessing Pools of Expert Forecasters

We develop a forecast superiority testing methodology that is robust to the choice of loss function. Following Jin, Corradi and Swanson (JCS: 2017), we rely on a mapping between forecast evaluation under generic loss and stochastic dominance principles. However, unlike the JCS tests, which are not uniformly valid and have correct asymptotic size only under the least favorable case, our tests are uniformly asymptotically valid and non-conservative. These properties are derived by first establishing uniform convergence (over the error support) of HAC variance estimators. Monte Carlo experiments indicate good finite sample performance of the new tests, and an empirical illustration suggests that prior forecast accuracy matters in the Survey of Professional Forecasters. Namely, at our longest forecast horizon (4 quarters ahead), selecting pools of expert forecasters based on prior accuracy yields ensemble forecasts that are superior to simple averages and medians formed from the entire panel of experts.
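To give a sense of the mapping the abstract describes, the Python sketch below tests stochastic dominance between the absolute-error distributions of two forecasters: if F_{|e1|}(x) >= F_{|e2|}(x) at every x in the error support, forecaster 1 is weakly superior under any loss that is nondecreasing in the absolute error. Each candidate CDF difference is studentized with a Newey-West HAC variance, echoing the role of the HAC estimators above. This is a minimal sketch, not the authors' procedure: the function names (hac_variance, superiority_statistic), the Bartlett-kernel bandwidth rule, and the quantile grid are illustrative assumptions, and the refinements that make the paper's tests uniformly valid and non-conservative (rather than least-favorable-case) are omitted.

```python
import numpy as np

def hac_variance(z, bandwidth=None):
    """Newey-West (Bartlett kernel) estimate of the long-run variance of z,
    i.e. the asymptotic variance of sqrt(T) times the sample mean."""
    z = np.asarray(z, dtype=float)
    T = z.size
    if bandwidth is None:
        # Rule-of-thumb lag truncation (an assumption, not the paper's choice).
        bandwidth = int(np.floor(4.0 * (T / 100.0) ** (2.0 / 9.0)))
    zc = z - z.mean()
    v = zc @ zc / T
    for j in range(1, min(bandwidth, T - 1) + 1):
        w = 1.0 - j / (bandwidth + 1.0)  # Bartlett weight
        v += 2.0 * w * (zc[j:] @ zc[:-j]) / T
    return v

def superiority_statistic(e1, e2, n_grid=50):
    """Studentized sup statistic for H0: forecaster 1 weakly dominates
    forecaster 2, i.e. F_{|e1|}(x) >= F_{|e2|}(x) over the error support.
    Large values are evidence against dominance."""
    a1, a2 = np.abs(np.asarray(e1)), np.abs(np.asarray(e2))
    T = a1.size
    # Evaluation grid over the pooled absolute errors (illustrative choice).
    grid = np.quantile(np.concatenate([a1, a2]), np.linspace(0.05, 0.95, n_grid))
    stats = []
    for x in grid:
        # d_t is negative on average wherever F_{|e1|}(x) < F_{|e2|}(x),
        # i.e. wherever dominance of forecaster 1 is violated.
        d = (a1 <= x).astype(float) - (a2 <= x).astype(float)
        se = np.sqrt(max(hac_variance(d), 1e-12))
        stats.append(np.sqrt(T) * (-d.mean()) / se)
    return np.max(stats)

# Toy usage: forecaster 1 has smaller errors, so the statistic should be small
# and dominance of forecaster 1 should not be rejected.
rng = np.random.default_rng(0)
e1 = 0.8 * rng.standard_normal(400)
e2 = 1.0 * rng.standard_normal(400)
print(superiority_statistic(e1, e2))
```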


Suggested Citation

Corradi, Valentina; Jin, Sainan; Swanson, Norman R. (2023): Robust Forecast Superiority Testing with an Application to Assessing Pools of Expert Forecasters. Version: 1. Journal of Applied Econometrics. Dataset. http://dx.doi.org/10.15456/jae.2023004.1604544168
