CMX Lunch Seminar
We consider the problem of approximating high-dimensional functions using shallow neural networks. We begin by introducing natural spaces of functions that can be efficiently approximated by such networks. Then, we derive the metric entropy of the unit balls in these spaces; combined with recent work connecting stable approximation rates to metric entropy, this yields the optimal approximation rates for these spaces. Next, we show that higher approximation rates can be obtained by further restricting the function class. In particular, for a restrictive but natural space of functions, shallow networks with the ReLU$^k$ activation function achieve an approximation rate of $O(n^{-(k+1)})$ in every dimension. Finally, we discuss the connections between this surprising result and the finite element method.
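For reference (this formula is not part of the abstract itself), a shallow network with ReLU$^k$ activation is typically a single-hidden-layer model of the form
$$f_n(x) \;=\; \sum_{i=1}^{n} a_i\,\sigma_k\!\left(w_i \cdot x + b_i\right), \qquad \sigma_k(t) = \max(0,t)^k,$$
with coefficients $a_i \in \mathbb{R}$, weights $w_i \in \mathbb{R}^d$, and biases $b_i \in \mathbb{R}$; here $n$ is the number of neurons appearing in the rate $O(n^{-(k+1)})$.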