Penalized versus Generalized Quasi-likelihood Inference in GLMM
- Speaker: Professor Brajendra Sutradhar, Memorial University of Newfoundland, Canada
- Time: 1:00 p.m., Monday 16th May 2005
- Venue: Red Centre, Room RC-4082, near Barker Street Gate 14
For the estimation of the main parameters in the generalized linear mixed
model (GLMM) setup, the penalized quasi-likelihood (PQL) approach,
analogous to best linear unbiased prediction (BLUP), treats the random
effects as fixed effects and estimates them as such. The regression and
variance components of the GLMMs are then estimated based on these
estimates of the so-called random effects. Consequently, the PQL approach
may or may not yield consistent estimates for the variance component of the
random effects, depending on the cluster size and the associated design matrix.
In this talk, we introduce an exact quasi-likelihood approach that always
yields consistent estimators for the parameters of the GLMMs. This approach
also yields more efficient estimators than those obtained by a recently
introduced simulated moment approach. Binary and Poisson mixed models are
considered as examples to compare the asymptotic efficiencies.
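As a small illustration of the setting (not the speaker's method), the sketch below simulates a Poisson mixed model with a random intercept per cluster, y_ij | gamma_i ~ Poisson(exp(beta0 + gamma_i)) with gamma_i ~ N(0, sigma2), and recovers the variance component by a simple method-of-moments step using the lognormal identities E[y] = exp(beta0 + sigma2/2) and Var(y) = E[y] + E[y]^2 (exp(sigma2) - 1). All parameter values and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative true parameters (assumed, not from the talk)
n_clusters, cluster_size = 5000, 5
beta0, sigma2 = 1.0, 0.5  # fixed intercept and random-effect variance

# Simulate: one random intercept gamma_i per cluster, shared by its members
gamma = rng.normal(0.0, np.sqrt(sigma2), size=n_clusters)
mu = np.exp(beta0 + gamma)[:, None] * np.ones((1, cluster_size))
y = rng.poisson(mu)  # Poisson counts, correlated within clusters

ybar = y.mean()
s2 = y.var(ddof=1)

# Invert the marginal variance identity for a moment estimate of sigma2:
#   Var(y) = E[y] + E[y]^2 (exp(sigma2) - 1)
sigma2_hat = np.log1p((s2 - ybar) / ybar**2)
# Then back out the intercept from E[y] = exp(beta0 + sigma2/2)
beta0_hat = np.log(ybar) - sigma2_hat / 2

print(sigma2_hat, beta0_hat)
```

With many clusters the moment estimates land close to the true (sigma2, beta0); the talk's point is that naive PQL-style plug-in of estimated random effects need not enjoy this consistency in general.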
You are invited to lunch with the speaker before the seminar, meeting on level 1 at 12:15.
David Warton
e-mail: David.Warton@unsw.edu.au