Optimal neural network approximation of smooth compositional functions on sets with low intrinsic dimension
Abstract
We study approximation and statistical learning properties of deep ReLU networks under structural assumptions that mitigate the curse of dimensionality. We prove minimax-optimal uniform approximation rates for $s$-Hölder smooth functions defined on sets with low Minkowski dimension using fully connected networks with flexible width and depth, improving existing results by logarithmic factors even in classical full-dimensional settings. A key technical ingredient is a new memorization result for deep ReLU networks that enables efficient point fitting with dense architectures. We further introduce a class of compositional models in which each component function is smooth and acts on a domain of low intrinsic dimension. This framework unifies two common assumptions in the statistical learning literature, namely structural constraints on the target function and low dimensionality of the covariates, within a single model. We show that deep networks can approximate such functions at rates determined by the most difficult component in the composition. As an application, we derive improved convergence rates for empirical risk minimization in nonparametric regression that adapt to smoothness, compositional structure, and intrinsic dimensionality.
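To make the compositional model concrete, one standard formalization from the nonparametric regression literature is sketched below; the symbols $q$, $g_i$, $s_i$, and $d_i$ are illustrative, and the exact rates proved in the paper may differ:

$$
f = g_q \circ g_{q-1} \circ \cdots \circ g_1, \qquad g_i \in \mathcal{H}^{s_i}(D_i), \quad \dim_M(D_i) \le d_i,
$$

where $\mathcal{H}^{s_i}(D_i)$ denotes the $s_i$-Hölder ball on a domain $D_i$ whose Minkowski dimension is at most $d_i$. For compositions of this type, results in the compositional-regression literature (e.g., Schmidt-Hieber, Annals of Statistics, 2020, with the ambient dimension of each component replaced by its intrinsic dimension) give empirical risk minimization rates of the form

$$
\max_{1 \le i \le q} n^{-\frac{2 s_i^*}{2 s_i^* + d_i}}, \qquad s_i^* := s_i \prod_{j=i+1}^{q} \min(s_j, 1),
$$

so the slowest component governs the overall rate, consistent with the abstract's statement that the most difficult component in the composition determines the rate.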