For standard discrete models, bootstrap P-values perform extraordinarily well, much better than asymptotics predict. I trace this to three specific mathematical properties of the bootstrap transformation. First, it corrects boundary anomalies. Second, it improves the pivotality of the P-value. Third, it correctly calibrates the P-value, though apparently only when applied to likelihood-based test statistics. I then move on to how to compute bootstrap (and other more exotic) P-values using importance sampling. This involves sampling whole curves from the profile distribution of the P-value. The standard recommendation for choosing the biasing distribution fails, at least in the case of logistic regression. I will identify why it fails and develop a better criterion that works extremely well in all examples considered.
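To illustrate the kind of bootstrap P-value the abstract refers to (this is a generic sketch, not the speaker's own method or code), consider testing equality of two binomial proportions with a likelihood-ratio statistic. The parametric bootstrap P-value plugs the null MLE into the model and estimates, by simulation, the probability of a statistic at least as extreme as the one observed:

```python
import numpy as np

def lr_stat(x1, n1, x2, n2):
    # Likelihood-ratio statistic for H0: p1 == p2 (two independent binomials).
    p1, p2 = x1 / n1, x2 / n2
    p0 = (x1 + x2) / (n1 + n2)  # pooled MLE under the null
    def ll(x, n, p):
        # Binomial log-likelihood kernel; eps guards log(0) at the boundary.
        eps = 1e-12
        return x * np.log(max(p, eps)) + (n - x) * np.log(max(1 - p, eps))
    return 2 * (ll(x1, n1, p1) + ll(x2, n2, p2)
                - ll(x1, n1, p0) - ll(x2, n2, p0))

def bootstrap_pvalue(x1, n1, x2, n2, B=10000, seed=0):
    # Parametric bootstrap: resample both samples from the null MLE p0
    # and count how often the resampled statistic reaches the observed one.
    rng = np.random.default_rng(seed)
    t_obs = lr_stat(x1, n1, x2, n2)
    p0 = (x1 + x2) / (n1 + n2)
    b1 = rng.binomial(n1, p0, size=B)
    b2 = rng.binomial(n2, p0, size=B)
    t_boot = np.array([lr_stat(a, n1, b, n2) for a, b in zip(b1, b2)])
    return float(np.mean(t_boot >= t_obs))

p = bootstrap_pvalue(7, 20, 14, 20)
```

Plain Monte Carlo like this needs many replicates to pin down a small P-value accurately; the importance-sampling approach mentioned in the abstract replaces the null resampling distribution with a biasing distribution and reweights, which is where the choice of biasing distribution becomes critical.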

About the speaker: Chris Lloyd is Professor and Associate Dean of Research at the Melbourne Business School, The University of Melbourne. His expertise stretches from statistics to market research, within both academic and business environments. He has consulted widely on behalf of the scientific community, as well as for several prominent legal firms. He is also managing editor and joint theory and methods editor of the Australian & New Zealand Journal of Statistics.


Speaker: Professor Chris Lloyd
Research Area: Statistics
Seminar: Fri, 30/07/2010 - 4:00pm