In a statistical world faced with an explosion of data, regularization has become an important ingredient. In many problems we have far more variables than observations, and the lasso penalty and its hybrids have become increasingly useful. This talk presents a general framework for fitting large-scale regularization paths for a variety of problems. We describe the approach and demonstrate it via examples using our R package GLMNET. We then outline a series of related problems using extensions of these ideas.
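As a minimal sketch of what a lasso regularization path looks like in the p >> n setting the abstract describes: the example below uses scikit-learn's `lasso_path` in Python as a stand-in illustration (GLMNET itself is an R package, and this is not the speaker's implementation). The data, dimensions, and coefficient values are invented for illustration.

```python
# Hypothetical illustration of a lasso regularization path in the
# p >> n regime. GLMNET is an R package; this sketch uses
# scikit-learn's lasso_path, an analogous but distinct implementation.
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(0)
n, p = 50, 200                      # many more variables than observations
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, -1.0, 0.5]   # only 5 truly active variables
y = X @ beta + 0.1 * rng.standard_normal(n)

# Fit coefficients over a decreasing grid of penalty values (the "path").
alphas, coefs, _ = lasso_path(X, y, n_alphas=100)

# coefs has one column of p coefficients per penalty value.
print(coefs.shape)                  # (200, 100)

# As the penalty shrinks, variables enter the model one by one.
n_active = (np.abs(coefs) > 1e-8).sum(axis=0)
print(n_active[0], "->", n_active[-1])
```

At the largest penalty every coefficient is zero; as the penalty decreases, the sparse set of active variables grows, which is the path structure such solvers exploit.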

(This is a seminar in the SSAI AGR Seminar Series, via the Access Grid. The host institution is UTS.)


Trevor Hastie (via Access Grid)
Stanford University

Tue, 09/07/2013 - 4:00pm

RC-4082, The Red Centre (via Access Grid from UTS)