Liam Hodgkinson
Abstract
Computing the eigenvalues of large matrices is becoming increasingly critical across a range of machine learning applications. Yet, as datasets grow, forming full covariance or kernel matrices often becomes impractical. While matrix-vector product-based methods provide scalable alternatives, they often falter on ill-conditioned matrices and still assume access to the entire matrix. We propose a new approach that estimates matrix properties by extrapolating from smaller submatrices. Beginning with techniques for approximating log-determinants, we develop a full eigenvalue estimation method grounded in a principle from random matrix theory that we term "free decompression". This approach enables extremely fast eigenvalue estimation with remarkable accuracy, even for ill-conditioned matrices.
Statistics seminar
The University of Melbourne
Friday, 20 June 2025, 4:00 pm
Microsoft Teams / Anita B. Lawrence 4082