Optimization has become a vital tool in machine learning, providing a framework for formulating and solving problems in the area. Recently, the complex challenges posed by large data sets and the demand for a greater variety of analysis and learning tasks have brought a wider range of optimization tools into play. Among these tools are stochastic gradient methods, sparse optimization methods, enhanced first-order methods, coordinate descent, and approximate second-order methods. In this talk, we survey a variety of problem formulations and outline the optimization techniques relevant in each case, highlighting some recent developments in such areas as parallel stochastic gradient descent.
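To give a flavor of the first technique mentioned, stochastic gradient methods update model parameters using the gradient of a single (randomly chosen) data point at a time rather than the full data set. The sketch below is a minimal illustration on a synthetic one-dimensional least-squares problem; the data, learning rate, and variable names are assumptions for illustration, not the speaker's implementation.

```python
import random

# Synthetic data for the model y ≈ w * x, with true slope 3.0
# (illustrative assumption, not from the talk).
random.seed(0)
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]

w = 0.0    # initial parameter estimate
lr = 0.1   # fixed step size (learning rate)

for epoch in range(200):
    random.shuffle(data)          # visit samples in random order
    for x, y in data:
        grad = 2.0 * (w * x - y) * x  # gradient of the single-sample loss (w*x - y)^2
        w -= lr * grad                # stochastic gradient step
```

After a few hundred passes, `w` approaches the true slope 3.0. In large-scale learning, the appeal of this scheme is that each step touches only one data point, so its cost is independent of the data-set size.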


Professor Stephen Wright
University of Wisconsin-Madison

Applied Seminar
Mon, 28/05/2012 - 3:00pm to 4:00pm