Bayesian model selection and the concentration of the posterior of hyperparameters
Fundamentalʹnaâ i prikladnaâ matematika, Volume 18 (2013), no. 2, pp. 13-34.

See the article's record from the source Math-Net.Ru

The present paper offers a construction of a hyperprior that can be used for Bayesian model selection. The construction is inspired by the idea of unbiased model selection in the penalized maximum likelihood approach. The main result establishes a one-sided contraction of the posterior: the posterior mass is allocated to models of lower complexity than the oracle model.
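The connection between penalized maximum likelihood and a hyperprior on the model index can be sketched as follows; this is a minimal illustration with assumed notation, not the paper's exact construction. For a family of models indexed by a hyperparameter $m$ with log-likelihood $L(\theta)$, $\theta \in \Theta_m$, the penalized maximum likelihood selector reads
$$ \hat m \;=\; \operatorname*{arg\,max}_m \Bigl\{ \sup_{\theta \in \Theta_m} L(\theta) \;-\; \operatorname{pen}(m) \Bigr\}, $$
while a hyperprior of the form $\pi(m) \propto \exp\{-\operatorname{pen}(m)\}$ yields, by Bayes' formula, the marginal posterior of the hyperparameter
$$ \pi(m \mid Y) \;\propto\; \exp\{-\operatorname{pen}(m)\} \int_{\Theta_m} \exp\{L(\theta)\}\, \pi(\theta \mid m)\, d\theta , $$
so that little posterior mass remains on models whose penalized fit is far from the best one. The symbols $Y$, $L$, $\Theta_m$, $\operatorname{pen}$ and the exponential form of the hyperprior are assumptions made for this sketch; the specific penalty and the precise one-sided contraction statement are given in the article itself.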
@article{FPM_2013_18_2_a1,
     author = {N. P. Baldin and V. G. Spokoiny},
     title = {Bayesian model selection and the concentration of the posterior of hyperparameters},
     journal = {Fundamentalʹnaâ i prikladnaâ matematika},
     pages = {13--34},
     publisher = {mathdoc},
     volume = {18},
     number = {2},
     year = {2013},
     language = {ru},
     url = {http://geodesic.mathdoc.fr/item/FPM_2013_18_2_a1/}
}
TY  - JOUR
AU  - N. P. Baldin
AU  - V. G. Spokoiny
TI  - Bayesian model selection and the concentration of the posterior of hyperparameters
JO  - Fundamentalʹnaâ i prikladnaâ matematika
PY  - 2013
SP  - 13
EP  - 34
VL  - 18
IS  - 2
PB  - mathdoc
UR  - http://geodesic.mathdoc.fr/item/FPM_2013_18_2_a1/
LA  - ru
ID  - FPM_2013_18_2_a1
ER  - 
%0 Journal Article
%A N. P. Baldin
%A V. G. Spokoiny
%T Bayesian model selection and the concentration of the posterior of hyperparameters
%J Fundamentalʹnaâ i prikladnaâ matematika
%D 2013
%P 13-34
%V 18
%N 2
%I mathdoc
%U http://geodesic.mathdoc.fr/item/FPM_2013_18_2_a1/
%G ru
%F FPM_2013_18_2_a1
N. P. Baldin; V. G. Spokoiny. Bayesian model selection and the concentration of the posterior of hyperparameters. Fundamentalʹnaâ i prikladnaâ matematika, Volume 18 (2013), no. 2, pp. 13-34. http://geodesic.mathdoc.fr/item/FPM_2013_18_2_a1/
