David Krieg
Abstract
We consider the problem of recovering a complex-valued function on a set D based on a finite number of function values. The function is unknown; we only know that it belongs to a certain class F that reflects our a priori knowledge. Our theory works, for instance, if D is a compact
domain or manifold and F is a compact subset of the space of bounded functions on D. The recovery error is measured in a worst-case scenario and with respect to the L2-distance. We recently obtained the following result:
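In symbols, one standard way to formalize this (the notation e_n(F) is added here for illustration and is not part of the abstract): the quantity of interest is the minimal worst-case error over all algorithms that use n function values,

\[
  e_n(F) \;=\; \inf_{x_1,\dots,x_n \in D}\; \inf_{\phi\colon \mathbb{C}^n \to L_2}\; \sup_{f \in F}\; \bigl\| f - \phi\bigl(f(x_1),\dots,f(x_n)\bigr) \bigr\|_{L_2}.
\]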
If the Kolmogorov widths of F in L2 decay polynomially with order s > 1/2, then there is a sampling-based algorithm that achieves the same rate of convergence.
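Here the Kolmogorov widths measure how well F can be approximated by n-dimensional subspaces (again, the notation is added for illustration):

\[
  d_n(F)_{L_2} \;=\; \inf_{\substack{V \subset L_2 \\ \dim V \le n}}\; \sup_{f \in F}\; \inf_{g \in V}\; \| f - g \|_{L_2}.
\]

In this notation, the result says, roughly: if d_n(F)_{L_2} decays like n^{-s} for some s > 1/2, then e_n(F) decays like n^{-s} as well, up to constants.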
The algorithm is a (weighted) least squares estimator. Surprisingly, it achieves a better rate of convergence than the famous Smolyak algorithm for classical tensor product spaces. The proof is based on concentration results for random matrices and on a sparsification theorem due to Marcus, Spielman, and Srivastava, which goes under the names Weaver's theorem and the Kadison-Singer problem. For Hilbert classes F, the result means that algorithms using function values are asymptotically as powerful as algorithms using arbitrary linear measurements (such as Fourier coefficients or values of derivatives).
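As a rough illustration of what a weighted least squares estimator of this kind can look like, here is a minimal sketch. It is an addition, not the construction from the talk: the Legendre basis, the Christoffel sampling density, and all parameter choices are illustrative assumptions, and the subsampling step based on the Marcus-Spielman-Srivastava theorem is omitted.

```python
import numpy as np
from numpy.polynomial import legendre

def ortho_legendre(x, m):
    """Values of the first m L2-orthonormal Legendre polynomials at the points x."""
    V = legendre.legvander(x, m - 1)                   # columns P_0, ..., P_{m-1}
    norms = np.sqrt(2.0 / (2.0 * np.arange(m) + 1.0))  # L2-norms of P_k on [-1, 1]
    return V / norms

def sample_christoffel(m, n, rng):
    """Draw n i.i.d. points from the density rho(x) = (1/m) * sum_k b_k(x)^2."""
    pts = np.empty(0)
    while pts.size < n:
        x = rng.uniform(-1.0, 1.0, size=8 * n)
        rho = (ortho_legendre(x, m) ** 2).sum(axis=1) / m
        # Rejection sampling against the uniform proposal; rho <= m/2 on [-1, 1].
        accept = rng.uniform(size=x.size) < 2.0 * rho / m
        pts = np.concatenate([pts, x[accept]])
    return pts[:n]

def weighted_lsq_fit(f, m, n, seed=0):
    """Coefficients of the weighted least squares fit of f in the orthonormal basis."""
    rng = np.random.default_rng(seed)
    x = sample_christoffel(m, n, rng)
    B = ortho_legendre(x, m)                  # n x m design matrix
    w = m / (B ** 2).sum(axis=1)              # weights w(x_i) = 1 / rho(x_i)
    sw = np.sqrt(w)
    # Minimize sum_i w_i * |f(x_i) - g(x_i)|^2 over g in the span of the basis.
    coef, *_ = np.linalg.lstsq(B * sw[:, None], f(x) * sw, rcond=None)
    return coef

# Example: approximate the Runge function with m = 20 basis functions, n = 200 samples.
f = lambda x: 1.0 / (1.0 + 25.0 * x ** 2)
coef = weighted_lsq_fit(f, m=20, n=200)
```

The design choice behind the weighting: sampling from the density rho and weighting by its reciprocal makes the expected (normalized) Gram matrix of the design equal to the identity, which is the kind of structure that the random matrix concentration arguments exploit.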
Depending on time and interest, we may also address the following questions. What does the algorithm look like? What can be said in the case of a convergence rate s less than or equal to 1/2? What results do we obtain for the tractability of the approximation problem in high-dimensional settings? Are similar results possible in other Lp-norms, for instance for uniform approximation (p = infinity)?
Computational Mathematics
Johannes Kepler University Linz, Austria.
Thursday, 23 March 2023, 11:00 a.m.
RC-4082 and online (passcode: 112358)