Abstract

In the first part of the talk we show that, under some widely believed assumptions, there are no higher-order algorithms for basic tasks in computational mathematics such as computing integrals with neural network integrands, computing solutions of a Poisson equation with a neural network source term, and computing the matrix-vector product with a neural-network-encoded matrix. We demonstrate the sharpness of our results by providing fast quadrature algorithms for one-layer networks and by giving numerical evidence that quasi-Monte Carlo methods achieve the best possible order of convergence for quadrature with neural network integrands.
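As a rough illustration of the quadrature setting discussed here (not the speaker's algorithm), the following sketch compares plain Monte Carlo against a scrambled Sobol quasi-Monte Carlo rule for integrating a one-layer ReLU network over the unit cube; the random network weights and the use of scipy.stats.qmc are assumptions chosen purely for illustration.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)

# Illustrative one-layer ReLU network f(x) = sum_i a_i * relu(w_i . x + b_i);
# the weights below are random stand-ins, not taken from the talk.
d, width = 4, 64
W = rng.standard_normal((width, d))
b = rng.standard_normal(width)
a = rng.standard_normal(width) / width

def f(x):  # x has shape (n, d)
    return np.maximum(x @ W.T + b, 0.0) @ a

# Plain Monte Carlo estimate of the integral of f over [0,1]^d.
n = 2**12
mc_est = f(rng.random((n, d))).mean()

# Quasi-Monte Carlo estimate with the same budget of points,
# using a scrambled Sobol low-discrepancy sequence.
sobol = qmc.Sobol(d=d, scramble=True, seed=0)
qmc_est = f(sobol.random_base2(m=12)).mean()

print(f"MC  estimate: {mc_est:.6f}")
print(f"QMC estimate: {qmc_est:.6f}")
```

Comparing both estimators over repeated runs at increasing point counts would expose the difference in convergence order that the abstract alludes to.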

In the second part of the talk we introduce an iterative neural network construction based on the Banach Fixed Point Theorem, which, under certain assumptions, provides approximation results that do not rely on the smoothness of the underlying maps. This approach not only establishes a rigorous theoretical foundation but can also be used to improve traditional gradient descent methods by alternating fixed point iterations with gradient descent steps to speed up convergence. This construction is not limited to finite-dimensional spaces and promises insights for neural operators as well.
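To make the alternation of fixed point iterations and gradient descent steps concrete, here is a minimal numpy sketch under strong simplifying assumptions: the pointwise contraction T, the random-Fourier-feature surrogate standing in for a neural network, and all constants are illustrative, not the construction from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: find u* with u*(x) = T(u*)(x) := 0.5*sin(u*(x)) + g(x).
# T is a pointwise contraction (Lipschitz constant 0.5), so the Banach
# Fixed Point Theorem guarantees a unique fixed point.
g = lambda x: np.cos(2 * np.pi * x)
T = lambda u, x: 0.5 * np.sin(u) + g(x)

# Hypothetical surrogate model, linear in random Fourier features.
x = np.linspace(0.0, 1.0, 200)
freqs = rng.standard_normal(50)
phases = rng.uniform(0.0, 2 * np.pi, 50)
Phi = np.cos(np.outer(x, freqs) + phases)  # (200, 50) feature matrix
theta = np.zeros(50)

# Safe step size from the smoothness constant of the quadratic loss.
L = np.linalg.norm(Phi.T @ Phi / len(x), 2)
lr = 1.0 / L

u_target = np.zeros_like(x)  # Banach iterate, updated alongside training
for k in range(500):
    # Fixed point (Picard) step on the target: u <- T(u).
    u_target = T(u_target, x)
    # Gradient descent step fitting the model to the current iterate.
    residual = Phi @ theta - u_target
    theta -= lr * Phi.T @ residual / len(x)

u_model = Phi @ theta
print("fixed-point residual of model:",
      np.max(np.abs(u_model - T(u_model, x))))
```

The design point is that the Picard update drives the target toward the unique fixed point at a geometric rate independent of the model, while the interleaved gradient steps only have to track a slowly moving target.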

Speaker

Fabian Zehetgruber

Research Area

Computational Mathematics

Affiliation

TU Wien, Austria

Date

Tue, 10 Mar 2026, 10:00 am

Venue

Anita B. Lawrence-4082 and online (passcode: 112358)