Abstract: 

We introduce a new Markov chain Monte Carlo (MCMC) sampler that iterates by constructing conditional importance sampling (IS) approximations to target distributions. We present Markov interacting importance samplers (MIIS) in general form, followed by examples to demonstrate their flexibility. A leading application is when the exact Gibbs sampler is not available due to infeasibility of direct simulation from the conditional distributions. The MIIS algorithm uses conditional IS approximations to jointly sample the current state of the Markov chain and estimate conditional expectations (possibly by incorporating a full range of variance reduction techniques). We compute Rao-Blackwellized estimates based on the conditional expectations to construct control variates for estimating expectations under the target distribution. The control variates are particularly efficient when there are substantial correlations in the target distribution, a challenging setting for MCMC. We also introduce the MIIS random walk algorithm, designed to accelerate convergence and improve upon the computational efficiency of standard random walk samplers. Simulated and empirical illustrations for Bayesian analysis of the mixed logit model and Markov-modulated Poisson processes show that the method significantly reduces the variance of Monte Carlo estimates compared to standard MCMC approaches, at equivalent implementation and computational effort.
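The core idea of an IS-within-Gibbs update can be illustrated with a minimal sketch. The toy target (a correlated bivariate Gaussian), the Gaussian proposal, and all function names below are assumptions for illustration, not the MIIS algorithm from the talk: each conditional update draws weighted proposals, resamples one particle as the new chain state, and keeps the weighted average as a Rao-Blackwellized estimate of the conditional mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: bivariate normal with correlation rho, so each full
# conditional is N(rho * other, 1 - rho^2).  We pretend direct
# simulation from the conditional is unavailable and approximate it
# by importance sampling (an illustrative assumption, not the paper's setup).
rho = 0.8

def cond_logpdf(x, other):
    # Log density of the target conditional, up to an additive constant.
    return -0.5 * (x - rho * other) ** 2 / (1.0 - rho ** 2)

def is_conditional_step(other, n_particles=50):
    """One conditional IS update: propose from a wide Gaussian, weight
    by the target conditional, then (a) resample one particle as the
    new chain state and (b) return the weighted average as a
    Rao-Blackwellized estimate of the conditional mean."""
    props = rng.normal(loc=other, scale=2.0, size=n_particles)
    # Log importance weights: target conditional minus proposal density
    # (normalizing constants cancel after self-normalization).
    logw = cond_logpdf(props, other) - (-0.5 * (props - other) ** 2 / 4.0)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    new_state = rng.choice(props, p=w)   # sampled next state of the chain
    rb_estimate = float(np.sum(w * props))  # Rao-Blackwellized E[x | other]
    return new_state, rb_estimate

# A short Gibbs-style chain alternating the two coordinates.
x1, x2 = 0.0, 0.0
rb_means = []
for _ in range(2000):
    x1, rb1 = is_conditional_step(x2)
    x2, _ = is_conditional_step(x1)
    rb_means.append(rb1)

print(round(float(np.mean(rb_means)), 2))  # Rao-Blackwellized estimate of E[x1], near 0
```

Averaging the Rao-Blackwellized conditional means, rather than the sampled states themselves, is what reduces the Monte Carlo variance; the abstract's control variates build further on these conditional expectations.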


Speaker

Dr Eduardo Mendes

Research Area
Affiliation

UNSW Business School

Date

Thu, 19/03/2015 - 4:00pm

Venue

RC-4082, The Red Centre, UNSW