Despite several years of empirical success with deep learning for large-scale scientific and engineering problems, existing theoretical frameworks fail to explain many of even the most successful heuristics practitioners use to design effective neural network models. The primary weakness of most approaches is their reliance on the typical large-data regime, in which neural networks, owing to their large size, often do not operate. To overcome this issue, I will show that for any overparameterized (high-dimensional) model there exists a dual underparameterized (low-dimensional) model with the same marginal likelihood, establishing a form of Bayesian duality. Applying classical methods to this dual model yields the Interpolating Information Criterion, a measure of model quality that is consistent with current deep learning heuristics.


Liam Hodgkinson 

Statistics seminar


University of Melbourne


Friday, 23 February 2024, 4:00 pm


Microsoft Teams