Improved maximum noise level estimation in black-box optimization problems
Zapiski Nauchnykh Seminarov POMI, Investigations on applied mathematics and informatics. Part IV, Vol. 540 (2024), pp. 132-147. This article was harvested from the source Math-Net.Ru.

In black-box optimization, accurately estimating the maximum allowable noise level is crucial for robust performance. In this work, we propose a novel approach to improving the maximum noise level estimate, focusing on scenarios where only function values, possibly corrupted by bounded adversarial noise, are available. Building on gradient-free optimization algorithms, we introduce a new noise constraint based on the Lipschitz assumption, which improves the maximum noise level estimate (equivalently, the achievable error floor) for non-smooth convex functions. Theoretical analysis and numerical experiments demonstrate the effectiveness of our approach, even for smooth convex functions. This advancement enhances the robustness and efficiency of black-box optimization algorithms in domains such as machine learning and engineering design, where adversarial noise poses a significant challenge.
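
To make the setting concrete, the sketch below illustrates the standard gradient-free (zeroth-order) framework the paper works in: an oracle returns only function values corrupted by bounded adversarial noise, and a two-point randomized estimator converts these values into a gradient surrogate. This is a minimal illustration of the general setup, not the authors' algorithm; the test function, the smoothing radius gamma, the noise bound delta_max, and the step-size schedule are all assumptions chosen for the example.

import numpy as np

# Zeroth-order oracle: returns f(x) plus bounded (adversarial) noise.
# The noise is sampled randomly here for illustration; in the adversarial
# model it may depend on x arbitrarily, subject only to |noise| <= delta_max.
def noisy_oracle(f, x, delta_max, rng):
    return f(x) + rng.uniform(-delta_max, delta_max)

# Standard l2-randomized two-point gradient estimator:
#   g = d / (2*gamma) * (f(x + gamma*e) - f(x - gamma*e)) * e,
# with e uniform on the unit sphere. The bounded noise contributes an error
# of order d*delta_max/gamma, so gamma trades smoothing bias against noise.
def two_point_gradient_estimate(f, x, gamma, delta_max, rng):
    d = x.size
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)  # a normalized Gaussian is uniform on the sphere
    f_plus = noisy_oracle(f, x + gamma * e, delta_max, rng)
    f_minus = noisy_oracle(f, x - gamma * e, delta_max, rng)
    return (d / (2.0 * gamma)) * (f_plus - f_minus) * e

# Illustrative run: zero-order SGD on a convex quadratic (all constants are
# assumptions for this example, not values from the paper).
rng = np.random.default_rng(0)
f = lambda x: 0.5 * float(x @ x)
x = np.ones(10)
for t in range(1, 2001):
    g = two_point_gradient_estimate(f, x, gamma=1e-2, delta_max=1e-4, rng=rng)
    x -= (0.1 / t**0.5) * g  # diminishing step size, O(1/sqrt(t))
print(f"f(x) after 2000 steps: {f(x):.3e}")

Running this loop drives f(x) down to a noise-dependent floor rather than to zero; tightening the admissible noise bound, as the paper does via the Lipschitz assumption, directly lowers that floor.
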
@article{ZNSL_2024_540_a6,
     author = {A. Lobanov and A. Gasnikov},
     title = {Improved maximum noise level estimation in black-box optimization problems},
     journal = {Zapiski Nauchnykh Seminarov POMI},
     pages = {132--147},
     year = {2024},
     volume = {540},
     language = {en},
     url = {http://geodesic.mathdoc.fr/item/ZNSL_2024_540_a6/}
}
A. Lobanov; A. Gasnikov. Improved maximum noise level estimation in black-box optimization problems. Zapiski Nauchnykh Seminarov POMI, Investigations on applied mathematics and informatics. Part IV, Vol. 540 (2024), pp. 132-147. http://geodesic.mathdoc.fr/item/ZNSL_2024_540_a6/

[32] N. Kornilov, A. Gasnikov, P. Dvurechensky, and D. Dvinskikh, “Gradient-free methods for non-smooth convex stochastic optimization with heavy-tailed noise on convex compact”, Comput. Manag. Sci., 20:1 (2023), 37 | DOI | MR | Zbl