Presentation Name: High-dimensional function approximation: a quasi-interpolation perspective
Presenter: Prof. 高文武
Date: 2021-07-01
Time: 14:30-15:30
Abstract:
Approximations related to Kolmogorov's superposition theorem decompose a high-dimensional function approximation problem into several univariate cases and can thus break the curse of dimensionality to some extent. However, most existing schemes must solve a minimization problem to obtain the final approximant. We propose a general approach, within the framework of quasi-interpolation, that yields an approximant directly without solving any minimization problem. The final approximant takes a weighted average of the available data and assembles it with translations of an appropriately selected kernel having a similar univariate structure (e.g., a radial kernel or a tensor-product kernel) to approximants related to Kolmogorov's superposition theorem. To derive the approximation error of our approximant, we introduce a convolution operator with respect to the kernel and decompose the error into a sum of a convolution error and a discretization error. Such a decomposition (also known as the bias-variance decomposition in machine learning) allows us to adopt classical results from convolution theory and high-dimensional numerical integration to derive these two errors separately. Moreover, it provides a viewpoint in which the approximant acts as a regularization technique balancing a tradeoff between convolution error and discretization error. Both theoretical approximation error analysis and numerical implementations provide evidence that the proposed approximant is robust and capable of approximating high-dimensional functions.
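The abstract describes an approximant that is a weighted average of the data, assembled directly from translates of a kernel with no minimization step. As a minimal illustrative sketch (not the speaker's actual scheme), the snippet below builds a Shepard-type normalized quasi-interpolant with a Gaussian radial kernel; the kernel choice, the shape parameter h, and the test function are assumptions made for the example:

```python
import numpy as np

def quasi_interpolant(nodes, f_vals, h):
    """Return Q f(x) = sum_j w_j(x) f(x_j): a weighted average of the data
    built from translates of a Gaussian radial kernel (illustrative choice)."""
    def Q(x):
        # squared distances between evaluation points (m, d) and nodes (n, d)
        d2 = ((x[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=-1)
        w = np.exp(-d2 / (2.0 * h ** 2))      # kernel translates
        w /= w.sum(axis=1, keepdims=True)     # normalize: weights sum to 1
        return w @ f_vals                     # weighted average of the data
    return Q

rng = np.random.default_rng(0)
d = 5                                          # a moderately high dimension
nodes = rng.uniform(-1.0, 1.0, size=(400, d))  # scattered data sites
f = lambda x: np.exp(-(x ** 2).sum(axis=-1))   # hypothetical target function
Q = quasi_interpolant(nodes, f(nodes), h=0.5)

x_test = rng.uniform(-1.0, 1.0, size=(100, d))
err = np.max(np.abs(Q(x_test) - f(x_test)))    # sampled approximation error
```

Because the weights are normalized and nonnegative, Q(x) is a convex combination of the data values, so the approximant never oscillates outside the range of the samples; the shape parameter h plays the role of the tradeoff knob between convolution (smoothing) error and discretization error mentioned in the abstract.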

Poster


Annual Speech Directory: No.180

220 Handan Rd., Yangpu District, Shanghai 200433 | Operator: +86 21 65642222
