Generative adversarial networks (GANs) are an approach to fitting generative models over complex structured spaces. Within this framework, the fitting problem is posed as a zero-sum game between two competing neural networks that are trained simultaneously. Mathematically, this takes the form of a saddle-point problem: a well-known setting in which the usual (stochastic) gradient descent-type approaches used for training neural networks fail. In this talk, we rectify this shortcoming by proposing a new method for training GANs that (i) has theoretical guarantees of convergence and (ii) does not increase the per-iteration complexity relative to gradient descent. The theoretical analysis is performed within the framework of monotone operator splitting.
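The failure mode mentioned in the abstract can be seen on the simplest saddle-point problem, min_x max_y xy, whose unique saddle point is the origin. The sketch below is purely illustrative and is not the method from the talk: it contrasts plain simultaneous gradient descent-ascent (which spirals away from the solution) with an "optimistic" update that extrapolates using the previous gradient, one fresh gradient evaluation per iteration, so its per-iteration cost matches gradient descent.

```python
import math

def gda(x, y, eta=0.1, steps=1000):
    """Simultaneous gradient descent-ascent on f(x, y) = x * y.
    The iterates rotate and grow: this scheme diverges."""
    for _ in range(steps):
        # descent in x (gradient y), ascent in y (gradient x)
        x, y = x - eta * y, y + eta * x
    return x, y

def ogda(x, y, eta=0.1, steps=1000):
    """Optimistic gradient descent-ascent on f(x, y) = x * y.
    Uses the extrapolated gradient 2*g_k - g_{k-1}; only one new
    gradient evaluation per iteration, and it converges to (0, 0)."""
    gx_prev, gy_prev = y, -x  # initialise past gradient at the start point
    for _ in range(steps):
        gx, gy = y, -x        # current (descent-form) gradient
        x = x - eta * (2 * gx - gx_prev)
        y = y - eta * (2 * gy - gy_prev)
        gx_prev, gy_prev = gx, gy
    return x, y

print(math.hypot(*gda(1.0, 1.0)))   # large: plain GDA diverges
print(math.hypot(*ogda(1.0, 1.0)))  # small: optimistic update converges
```

The step size `eta`, the starting point, and the optimistic update rule are illustrative choices; the talk's method and guarantees come from the monotone operator splitting analysis, not from this toy example.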

Matthew Tam is Lecturer in Operations Research and a DECRA Fellow in the School of Mathematics and Statistics at the University of Melbourne. He received a PhD from the University of Newcastle in 2015 under the supervision of Jonathan Borwein, where he worked on iterative projection algorithms for optimisation. He then moved to the University of Göttingen (Germany) where he was a post-doctoral researcher with Russell Luke, supported initially by DFG-RTG2088 (“Discovering structure in complex data”) and later by a fellowship from the Alexander von Humboldt Foundation. Prior to joining the University of Melbourne, he was Junior Professor for Mathematical Optimisation within the Institute for Numerical and Applied Mathematics, also at the University of Göttingen.

Meeting ID: 945 8521 6229

Password: 331658

Matthew Tam (University of Melbourne)

Research Area: Applied Seminar

Thu, 14/05/2020 - 11:05am