The forward-backward splitting method is a well-known and efficient algorithm for solving nonsmooth convex optimization problems. The best-known complexity bound for this method is O(k⁻¹). However, numerical experiments usually show evidence of linear convergence in practice. In this seminar, I will explain this phenomenon using tools of generalized differentiation and variational analysis. The approach also allows us to reveal new information about the method and to obtain new conditions that guarantee its linear convergence when solving structured optimization problems such as ℓ1-regularized, nuclear-norm-regularized, and partly smooth ones.
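For readers unfamiliar with the method, the following is a minimal sketch of forward-backward splitting applied to an ℓ1-regularized least-squares problem, min_x ½‖Ax − b‖² + λ‖x‖₁: a forward (gradient) step on the smooth term followed by a backward (proximal) step on the ℓ1 term, which reduces to soft-thresholding. The problem data and step-size choice below are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, b, lam, n_iter=500):
    # Step size 1/L, where L = ||A||_2^2 is the Lipschitz constant of the
    # gradient of the smooth part 0.5 * ||A x - b||^2.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                          # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)   # backward (prox) step
    return x

# Illustrative sparse recovery instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
x_hat = forward_backward(A, b, lam=0.1)
```

Plotting the distance of the iterates to the limit point on a log scale for problems like this typically shows the straight-line decay that the talk's local linear-convergence analysis addresses.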


Dr. Nghia T.A. Tran




Oakland University, USA


Wed, 03/08/2016 - 11:05am to 11:55am


RC-4082, The Red Centre, UNSW