Mirror Descent Methods with a Weighting Scheme for Outputs for Optimization Problems with Functional Constraints
Russian journal of nonlinear dynamics, Volume 20 (2024) no. 5, pp. 727-745.

See the article's record from the source Math-Net.Ru

This paper is devoted to new mirror descent-type methods that switch between two types of iteration points (productive and nonproductive) for convex optimization problems with several convex functional (inequality-type) constraints. We propose two methods (a standard one and a modification) with a new weighting scheme for the points generated at each iteration, which assigns smaller weights to the initial points and larger weights to the most recent ones; empirically, this improves the convergence rate of the proposed methods. The modification reduces the running time of the method by skipping some of the functional constraints at nonproductive steps. We derive bounds on the convergence rate of the proposed methods with time-varying step sizes, which show that the methods are optimal from the viewpoint of lower oracle estimates. We also present the results of numerical experiments illustrating the advantages of the proposed methods on several examples: the best approximation problem, the Fermat–Torricelli–Steiner problem, the smallest covering ball problem, and the maximum of a finite collection of linear functions.
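As a rough, non-authoritative illustration of the scheme described in the abstract, the sketch below implements a subgradient analogue of mirror descent with switching between productive and nonproductive steps and a weighted output. It is not the authors' exact algorithm: it assumes a Euclidean prox-setup (so the mirror step reduces to a plain subgradient step), a hypothetical weighting w_k = k + 1 that grows with the iteration index, and step sizes h_k = 1/sqrt(k + 1); the names weighted_mirror_descent, f_subgrad, and constraints are made up for the example.

import numpy as np

def weighted_mirror_descent(f_subgrad, constraints, x0, eps, n_iter):
    # Hypothetical sketch, not the paper's exact method: Euclidean setup,
    # step sizes h_k = 1/sqrt(k + 1), and assumed weights w_k = k + 1 so
    # that recent productive points contribute more to the output.
    x = np.asarray(x0, dtype=float)
    productive_points, weights = [], []

    for k in range(n_iter):
        h_k = 1.0 / np.sqrt(k + 1)

        # Check the functional constraints; each g returns (value, subgradient).
        violated_subgrad = None
        for g in constraints:
            g_val, g_sub = g(x)
            if g_val > eps:
                violated_subgrad = np.asarray(g_sub, dtype=float)
                break

        if violated_subgrad is None:
            # Productive step: record the point with an increasing weight
            # and move along a subgradient of the objective.
            productive_points.append(x.copy())
            weights.append(k + 1)
            x = x - h_k * np.asarray(f_subgrad(x), dtype=float)
        else:
            # Nonproductive step: move along a subgradient of the violated
            # constraint; the point is excluded from the output.
            x = x - h_k * violated_subgrad

    if not productive_points:
        raise RuntimeError("no productive steps were made; relax eps or increase n_iter")
    return np.average(np.asarray(productive_points), axis=0,
                      weights=np.asarray(weights, dtype=float))

# Toy usage with made-up data: minimize ||x - a||_2 subject to <c, x> <= b.
a, c, b = np.array([2.0, 3.0]), np.array([1.0, 1.0]), 1.0
f_subgrad = lambda x: (x - a) / (np.linalg.norm(x - a) + 1e-12)
constraints = [lambda x: (c @ x - b, c)]
x_hat = weighted_mirror_descent(f_subgrad, constraints, x0=np.zeros(2), eps=1e-3, n_iter=5000)

The increasing weights only convey the idea of giving the most recent productive points a larger influence on the returned point; in the actual methods the prox-structure, step sizes, and weights are chosen to match the theoretical convergence rate.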
Keywords: convex optimization, nonsmooth problem, problem with functional constraints, mirror descent, optimal convergence rate
@article{ND_2024_20_5_a2,
     author = {M. S. Alkousa and F. S. Stonyakin and A. M. Abdo and M. M. Alcheikh},
     title = {Mirror {Descent} {Methods} with a {Weighting} {Scheme} for {Outputs} for {Optimization} {Problems} with {Functional} {Constraints}},
     journal = {Russian journal of nonlinear dynamics},
     pages = {727--745},
     publisher = {mathdoc},
     volume = {20},
     number = {5},
     year = {2024},
     language = {en},
     url = {http://geodesic.mathdoc.fr/item/ND_2024_20_5_a2/}
}
TY  - JOUR
AU  - M. S. Alkousa
AU  - F. S. Stonyakin
AU  - A. M. Abdo
AU  - M. M. Alcheikh
TI  - Mirror Descent Methods with a Weighting Scheme for Outputs for Optimization Problems with Functional Constraints
JO  - Russian journal of nonlinear dynamics
PY  - 2024
SP  - 727
EP  - 745
VL  - 20
IS  - 5
PB  - mathdoc
UR  - http://geodesic.mathdoc.fr/item/ND_2024_20_5_a2/
LA  - en
ID  - ND_2024_20_5_a2
ER  - 
%0 Journal Article
%A M. S. Alkousa
%A F. S. Stonyakin
%A A. M. Abdo
%A M. M. Alcheikh
%T Mirror Descent Methods with a Weighting Scheme for Outputs for Optimization Problems with Functional Constraints
%J Russian journal of nonlinear dynamics
%D 2024
%P 727-745
%V 20
%N 5
%I mathdoc
%U http://geodesic.mathdoc.fr/item/ND_2024_20_5_a2/
%G en
%F ND_2024_20_5_a2
M. S. Alkousa; F. S. Stonyakin; A. M. Abdo; M. M. Alcheikh. Mirror Descent Methods with a Weighting Scheme for Outputs for Optimization Problems with Functional Constraints. Russian journal of nonlinear dynamics, Volume 20 (2024) no. 5, pp. 727-745. http://geodesic.mathdoc.fr/item/ND_2024_20_5_a2/

[1] Alkousa, M., Stonyakin, F., Abdo, A., and Alcheikh, M., Optimal Convergence Rate for Mirror Descent Methods with Special Time-Varying Step Sizes Rules, arXiv:2401.04754 [math.OC], 2024, 35 pp. | Zbl

[2] Alkousa, M. S., “On Modification of an Adaptive Stochastic Mirror Descent Algorithm for Convex Optimization Problems with Functional Constraints”, Computational Mathematics and Applications, Forum Interdiscip. Math., eds. D. Zeidan, S. Padhi, A. Burqan, P. Ueberholz, Springer, Singapore, 2020, 47–63 | DOI | MR | Zbl

[3] Bayandina, A., Dvurechensky, P., Gasnikov, A., Stonyakin, F., and Titov, A., “Mirror Descent and Convex Optimization Problems with Non-Smooth Inequality Constraints”, Large-Scale and Distributed Optimization, Lecture Notes in Math., 2227, eds. P. Giselsson, A. Rantzer, Springer, Cham, 2018, 181–213 | DOI | MR | Zbl

[4] Beck, A. and Teboulle, M., “Mirror Descent and Nonlinear Projected Subgradient Methods for Convex Optimization”, Oper. Res. Lett., 31:3 (2003), 167–175 | DOI | MR | Zbl

[5] Beck, A., Ben-Tal, A., Guttmann-Beck, N., and Tetruashvili, L., “The CoMirror Algorithm for Solving Nonsmooth Constrained Convex Problems”, Oper. Res. Lett., 38:6 (2010), 493–498 | DOI | MR | Zbl

[6] Ben-Tal, A., Margalit, T., and Nemirovski, A., “The Ordered Subsets Mirror Descent Optimization Method with Applications to Tomography”, SIAM J. Optim., 12:1 (2001), 79–108 | DOI | MR | Zbl

[7] Ben-Tal, A. and Nemirovski, A., Lectures on Modern Convex Optimization, SIAM, Philadelphia, Penn., 2001, 504 pp. | MR | Zbl

[8] Bertsekas, D. P., Constrained Optimization and Lagrange Multiplier Methods, Athena Sci., Belmont, Mass., 2014, 410 pp. | MR

[9] Ben-Tal, A. and Nemirovski, A., “Robust Truss Topology Design via Semidefinite Programming”, SIAM J. Optim., 7:4 (1997), 991–1016 | DOI | MR | Zbl

[10] Boyd, S. and Vandenberghe, L., Convex Optimization, Cambridge Univ. Press, New York, 2004, 716 pp. | MR | Zbl

[11] Chen, G. and Teboulle, M., “Convergence Analysis of a Proximal-Like Minimization Algorithm Using Bregman Functions”, SIAM J. Optim., 3:3 (1993), 538–543 | DOI | MR | Zbl

[12] Doan, T. T., Bose, S., Nguyen, D. H., and Beck, C. L., “Convergence of the Iterates in Mirror Descent Methods”, IEEE Control Syst. Lett., 3:1 (2019), 114–119 | DOI | MR

[13] Fercoq, O., Alacaoglu, A., Necoara, I., and Cevher, V., “Almost Surely Constrained Convex Optimization”, Proc. of the 36th Internat. Conf. on Machine Learning (ICML'2019, Long Beach, Calif., USA, Jun 2019), 1910–1919

[14] Juditsky, A. B., Nazin, A. V., Tsybakov, A. B., and Vayatis, N., “Recursive Aggregation of Estimators by the Mirror Descent Algorithm with Averaging”, Probl. Inf. Transm., 41:4 (2005), 368–384 | DOI | MR | Zbl

[15] Lan, G. and Zhou, Z., “Algorithms for Stochastic Optimization with Function or Expectation Constraints”, Comput. Optim. Appl., 76:2 (2020), 461–498 | DOI | MR | Zbl

[16] Lin, Q., Ma, R., and Yang, T., “Level-Set Methods for Finite-Sum Constrained Convex Optimization”, Proc. of the 35th Internat. Conf. on Machine Learning (Stockholm, Sweden, Jul 2018), 3118–3127

[17] Nazin, A., Anulova, S., and Tremba, A., “Application of the Mirror Descent Method to Minimize Average Losses Coming by a Poisson Flow”, Proc. of the 13th European Control Conf. (Strasbourg, France, Jun 2014), 2194–2197

[18] Nazin, A. V. and Miller, B. M., “Mirror Descent Algorithm for Homogeneous Finite Controlled Markov Chains with Unknown Mean Losses”, IFAC Proc. Vol., 44:1 (2011), 12421–12426 | DOI

[19] Nemirovskii, A., “Efficient Methods for Large-Scale Convex Optimization Problems”, Ekonomika i Matematicheskie Metody, 15:1 (1979), 135–152 (Russian) | MR | Zbl

[20] Nemirovsky, A. and Yudin, D., Problem Complexity and Method Efficiency in Optimization, Wiley Series in Discrete Mathematics and Optimization, Wiley, New York, 1983, 404 pp. | MR | Zbl

[21] Nesterov, Yu., Lectures on Convex Optimization, Springer Optimization and Its Applications, Springer, Cham, 2018, xxiii, 589 pp. | DOI | MR | Zbl

[22] Nocedal, J. and Wright, S. J., Numerical Optimization, Springer Series in Operations Research and Financial Engineering, 2nd ed., Springer, New York, 2006, xxii, 664 pp. | MR | Zbl

[23] Polyak, B. T., “A General Method for Solving Extremal Problems”, Dokl. Akad. Nauk SSSR, 174:1 (1967), 33–36 (Russian) | MR | Zbl

[24] Shor, N. Z., “Application of the Generalized Gradient Descent in Block Programming”, Kibernetika, 3:3 (1967), 53–55 (Russian) | DOI | MR | Zbl

[25] Shpirko, S. and Nesterov, Yu., “Primal-Dual Subgradient Methods for Huge-Scale Linear Conic Problem”, SIAM J. Optim., 24:3 (2014), 1444–1457 | DOI | MR | Zbl

[26] Stonyakin, F. S., Alkousa, M. S., Stepanov, A. N., and Barinov, M. A., “Adaptive Mirror Descent Algorithms in Convex Programming Problems with Lipschitz Constraints”, Trudy Inst. Mat. i Mekh. UrO RAN, 24:2 (2018), 266–279 (Russian) | MR

[27] Stonyakin, F. S., Alkousa, M. S., Stepanov, A. N., and Titov, A. A., “Adaptive Mirror Descent Algorithms for Convex and Strongly Convex Optimization Problems with Functional Constraints”, J. Appl. Ind. Math., 13:3 (2019), 557–574 | DOI | MR | Zbl

[28] Tremba, A. and Nazin, A., “Extension of a Saddle Point Mirror Descent Algorithm with Application to Robust PageRank”, Proc. of the 52nd IEEE Conf. on Decision and Control (Firenze, Italy, Dec 2013), 3691–3696

[29] Xu, Y., “Iteration Complexity of Inexact Augmented Lagrangian Methods for Constrained Convex Programming”, Math. Program., 185:1–2 (2021), 199–244 | MR | Zbl

[30] Xu, Y., “Primal-Dual Stochastic Gradient Method for Convex Programs with Many Functional Constraints”, SIAM J. Optim., 30:2 (2020), 1664–1692 | DOI | MR | Zbl

[31] Zhu, Z., Zhang, Y., and Xia, Y., “Convergence Rate of Projected Subgradient Method with Time-Varying Step-Sizes”, Optim. Lett., 2024, 4 pp. (Short Communication), arXiv:2402.10221 [math.OC]