The second installment of the Chamberlain (Virtual) Econometrics Seminar
featured Tim Armstrong presenting new work on Empirical Bayes Confidence
Intervals coauthored with Michal Kolesár and Mikkel Plagborg-Møller,
and available from http://arxiv.org/abs/2004.03448. Their approach seemed to
be motivated by recent work of Chetty using James-Stein shrinkage. The
objective was to produce "robust" confidence intervals for such linear
shrinkage estimators, and the technique employed was a clever application
of the method of moment spaces. This seemed to perform well in the
classical Gaussian sequence model when the mixing distribution was itself
Gaussian. But as shown by Bruce Hansen in his discussion of the paper
following Tim's presentation, the procedure exhibits severe
undercoverage in a typical Gaussian outlier model. Thus, in Box's (1953)
terminology, it apparently possesses neither "robustness of validity", i.e. coverage,
nor "robustness of efficiency", i.e. length.
This led me to wonder how nonparametric empirical Bayes methods would
perform under such circumstances. Bruce was kind enough to provide
code for his simulations and I spent the Easter weekend exploring this
question. The result is yet another R Vinaigrette. I compare two
flavors of nonparametric empirical Bayes confidence intervals, both
based on what Efron calls G-modeling. A nonparametric estimate of the
mixing distribution G serves as a "plug-in prior," and intervals
are then based on posterior quantiles. We consider two estimators for G:
the classical NPMLE of Kiefer and Wolfowitz, and the log-spline estimator
of Efron. Spoiler alert: in keeping with the samokritika (self-criticism)
spirit of the Vinaigrette -- Efron wins. Links to the text and the code
for the simulations reported therein can be found at:
~
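To fix ideas, here is a small Python toy illustrating the plug-in-prior construction just described. It is not the authors' code (their simulations are in R): it estimates the mixing distribution G on a fixed grid by EM, as a crude stand-in for the Kiefer-Wolfowitz NPMLE, and forms posterior-quantile intervals in a Gaussian sequence model with a Gaussian mixing distribution. The sample size, grid, and simulation design are illustrative assumptions.

```python
import numpy as np

# Gaussian sequence model: y_i = theta_i + e_i with e_i ~ N(0, 1),
# and theta_i drawn i.i.d. from an unknown mixing distribution G.
rng = np.random.default_rng(0)
n = 500
theta = rng.normal(0.0, 2.0, size=n)      # true (unknown) G = N(0, 4)
y = theta + rng.normal(size=n)

# Grid-based EM estimate of G: a crude stand-in for the
# Kiefer-Wolfowitz NPMLE (not the authors' implementation).
grid = np.linspace(y.min() - 1.0, y.max() + 1.0, 200)
w = np.full(grid.size, 1.0 / grid.size)   # mixing weights on the grid
lik = np.exp(-0.5 * (y[:, None] - grid[None, :]) ** 2) / np.sqrt(2 * np.pi)
for _ in range(200):
    post = lik * w
    post /= post.sum(axis=1, keepdims=True)   # posterior over grid points
    w = post.mean(axis=0)                     # EM update of the weights

# Plug-in prior: posterior-quantile intervals for each theta_i.
post = lik * w
post /= post.sum(axis=1, keepdims=True)
cdf = np.cumsum(post, axis=1)
lo = grid[np.argmax(cdf >= 0.025, axis=1)]
hi = grid[np.argmax(cdf >= 0.975, axis=1)]

cover = np.mean((lo <= theta) & (theta <= hi))
print(f"average coverage of nominal 95% intervals: {cover:.3f}")
```

With a Gaussian G, as in the favorable case mentioned above, the average coverage of these intervals is close to the nominal level; the interesting question raised by the outlier designs is what happens when G is far from Gaussian.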
Thanks, Roger, for this thoughtful Monte Carlo analysis! Following our email discussion, we're taking you up on your suggestion of posting a response here on your blog.
Your comments, along with Bruce's discussion, inspired us to develop finite sample corrections, in the spirit of Morris's original work on the topic. We found that these lead to excellent average coverage in Bruce's designs and the additional designs in your note, as well as in other Monte Carlo designs inspired by one of our applications. These are discussed in the most recent (June 2020) version of the paper
https://arxiv.org/abs/2004.03448
and we've prepared a note with more details on the performance in your Monte Carlo designs here:
https://www.dropbox.com/s/4ui879yxcc99imv/ebci_koenker_reply.pdf?dl=0
Thanks again for taking the time to make these thoughtful comments and Monte Carlos!
Best,
Tim, Michal and Mikkel