Kernel estimators have been popular for decades in long-run variance estimation. To minimize the loss of efficiency, measured by the mean-squared error, in the key aspects of kernel estimation, we propose a novel class of converging kernel estimators with the following “no-lose” properties: (1) no efficiency loss from estimating the bandwidth, as the optimal choice is universal; (2) no efficiency loss from ensuring positive-definiteness, via a principle-driven aggregation technique; and (3) no asymptotic efficiency loss from potentially misspecified prewhitening models and transformations of the time series. A shrinkage prewhitening transformation is proposed for more robust finite-sample performance. The estimator has a positive bias that diminishes with the sample size, making it more conservative than the typically negatively biased classical estimators. The proposal improves upon all standard kernel functions and generalizes well to the multivariate case. We discuss its performance through simulation results and two real-data applications: the forecast breakdown test and MCMC convergence diagnostics.
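For readers unfamiliar with the classical baseline the talk improves upon, the sketch below implements a standard Bartlett (Newey-West) kernel long-run variance estimator. This is only the textbook estimator, not the speaker's proposed converging-kernel estimator; the function name, the rule-of-thumb bandwidth, and the AR(1) test series are illustrative assumptions.

```python
import numpy as np

def bartlett_lrv(x, bandwidth=None):
    """Classical Bartlett (Newey-West) long-run variance estimator.

    Shown only as the standard kernel baseline; the talk's proposed
    estimator (universal bandwidth, aggregation, shrinkage
    prewhitening) is not reproduced here.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    xc = x - x.mean()
    if bandwidth is None:
        # A common rule-of-thumb choice; note the talk argues that for
        # the proposed class the optimal bandwidth is universal, so no
        # efficiency is lost to bandwidth estimation.
        bandwidth = int(np.floor(4 * (n / 100.0) ** (2.0 / 9.0)))
    # Sample autocovariances gamma_0, ..., gamma_b
    gamma = [xc[: n - k] @ xc[k:] / n for k in range(bandwidth + 1)]
    # Bartlett weights w_k = 1 - k/(b+1) ensure positive semi-definiteness
    return gamma[0] + 2.0 * sum(
        (1.0 - k / (bandwidth + 1.0)) * gamma[k]
        for k in range(1, bandwidth + 1)
    )

# AR(1) series with phi = 0.5: true long-run variance is
# sigma^2 / (1 - phi)^2 = 4.  The Bartlett estimate is biased
# downward, illustrating the negative bias of classical estimators
# mentioned in the abstract.
rng = np.random.default_rng(0)
phi, n = 0.5, 20000
e = rng.standard_normal(n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]
print(bartlett_lrv(x))
```

The downward finite-sample bias visible here is the behavior the abstract contrasts with: the proposed estimator instead carries a positive, vanishing bias and is therefore more conservative.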


Kin Wai (Keith) Chan
The Chinese University of Hong Kong

Research Area: Statistics seminar

Friday, 16 June 2023, 4pm
Zoom (link below)