Abstract:

Deep neural networks (DNNs) are a powerful tool for function approximation. We describe flexible versions of generalized linear and generalized linear mixed models that incorporate basis functions formed by a deep neural network, and we develop a fully Bayesian treatment of inference in these DNN-based flexible regression models. Efficient computational methods for Bayesian inference are developed based on a Gaussian variational approximation that uses a parsimonious but flexible factor parametrization of the covariance matrix. We implement natural-gradient methods for the optimization, exploiting the factor structure of the variational covariance matrix to perform fast matrix-vector multiplications within the iterative conjugate gradient linear solvers used in the natural-gradient computations. The method scales to high dimensions, and the use of the natural gradient allows faster and more stable convergence of the variational algorithm. The proposed methods are illustrated in several examples of DNN random effects models and high-dimensional logistic regression with sparse signal shrinkage priors, and we obtain results superior to commonly used flexible regression methods such as Bayesian Additive Regression Trees (BART).
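To illustrate why the factor parametrization matters computationally (a minimal sketch, not the speaker's implementation; all sizes and names here are illustrative assumptions): writing the variational covariance as Sigma = B B^T + D^2, with B an n-by-p factor loading matrix and D diagonal, allows matrix-vector products in O(np) rather than O(n^2), which is what makes iterative conjugate gradient solves inside natural-gradient updates cheap in high dimensions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n, p = 1000, 5                           # dimension and number of factors (illustrative)
B = 0.1 * rng.standard_normal((n, p))    # factor loading matrix
d = 0.5 + rng.random(n)                  # diagonal entries of D

# Matrix-vector product with Sigma = B B^T + D^2 in O(np),
# never forming the dense n x n covariance matrix.
def sigma_matvec(v):
    return B @ (B.T @ v) + (d ** 2) * v

Sigma_op = LinearOperator((n, n), matvec=sigma_matvec)

# Solve Sigma x = b iteratively with conjugate gradients,
# as would be done inside a natural-gradient computation.
b = rng.standard_normal(n)
x, info = cg(Sigma_op, b)

# Verify against the explicit dense covariance (only for checking).
Sigma_dense = B @ B.T + np.diag(d ** 2)
rel_residual = np.linalg.norm(Sigma_dense @ x - b) / np.linalg.norm(b)
print(info == 0 and rel_residual < 1e-4)
```

Because `Sigma_op` only needs a matvec, the same pattern extends to very large n, where the dense covariance would not fit in memory.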

Speaker

Minh-Ngoc Tran

Affiliation

University of Sydney

Date

Fri, 23/03/2018 - 4:00pm

Venue

RC-4082, Red Centre, UNSW