On Universal Estimators in Learning Theory
Informatics and Automation, Function spaces, approximation theory, and nonlinear analysis, Tome 255 (2006), pp. 256-272
See the article record from the source Math-Net.Ru
This paper addresses the problem of constructing and analyzing estimators for the regression problem in supervised learning. Recently, there has been great interest in studying universal estimators. The term “universal” means that, on the one hand, the estimator does not depend on the a priori assumption that the regression function $f_\rho$ belongs to some class $F$ from a collection of classes $\mathcal F$ and, on the other hand, the estimation error for $f_\rho$ is close to the optimal error for the class $F$. This paper illustrates how the general technique of constructing universal estimators, developed in the author's previous paper, can be applied in concrete situations. The setting of the problem studied in the paper was motivated by a recent paper by Smale and Zhou. The starting point for us is a kernel $K(x,u)$ defined on $X\times \Omega$. On the basis of this kernel, we build an estimator that is universal for classes defined in terms of nonlinear approximation with respect to the system $\{K(\cdot ,u)\}_{u\in \Omega }$. To construct an easily implementable estimator, we apply the relaxed greedy algorithm.
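The abstract mentions the relaxed greedy algorithm over the dictionary $\{K(\cdot,u)\}_{u\in\Omega}$. The following is a minimal sketch of that general idea, not the paper's construction: it approximates a target vector by dictionary atoms sampled from a hypothetical Gaussian kernel, using the relaxed greedy update $f_m = (1 - 1/m)\,f_{m-1} + c\, g_m$, where $g_m$ is the atom most correlated with the residual and $c$ is chosen by line search. All names and parameters here are illustrative assumptions.

```python
import math

def gaussian_kernel(x, u, width=0.5):
    """Hypothetical kernel K(x, u); the paper works with a general kernel."""
    return math.exp(-((x - u) ** 2) / (2 * width ** 2))

def relaxed_greedy(y, xs, centers, steps=50, kernel=gaussian_kernel):
    """Sketch of a relaxed greedy algorithm over the dictionary {K(., u)}.

    y       -- target values at the sample points xs
    centers -- parameters u defining the dictionary atoms K(., u)
    """
    n = len(y)
    # Dictionary atoms: one vector (K(x_1, u), ..., K(x_n, u)) per center u.
    atoms = [[kernel(x, u) for x in xs] for u in centers]
    f = [0.0] * n
    for m in range(1, steps + 1):
        # Relaxation step: shrink the current approximant by (1 - 1/m).
        fs = [(1 - 1.0 / m) * f[i] for i in range(n)]
        r = [y[i] - fs[i] for i in range(n)]
        # Greedy step: pick the atom most correlated with the residual.
        g = max(atoms, key=lambda a: abs(sum(r[i] * a[i] for i in range(n))))
        # Line search for the coefficient along the chosen atom.
        c = sum(r[i] * g[i] for i in range(n)) / (sum(v * v for v in g) or 1.0)
        f = [fs[i] + c * g[i] for i in range(n)]
    return f
```

In learning-theory applications the residual would be measured against the observed sample rather than a known target, but the update rule has the same shape; the relaxation factor $(1 - 1/m)$ is what makes the algorithm easy to analyze and implement.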
@article{TRSPY_2006_255_a19,
author = {V. N. Temlyakov},
title = {On {Universal} {Estimators} in {Learning} {Theory}},
journal = {Informatics and Automation},
pages = {256--272},
publisher = {mathdoc},
volume = {255},
year = {2006},
language = {ru},
url = {http://geodesic.mathdoc.fr/item/TRSPY_2006_255_a19/}
}
V. N. Temlyakov. On Universal Estimators in Learning Theory. Informatics and Automation, Function spaces, approximation theory, and nonlinear analysis, Tome 255 (2006), pp. 256-272. http://geodesic.mathdoc.fr/item/TRSPY_2006_255_a19/