Accelerated Stochastic ExtraGradient: Mixing Hessian and gradient similarity to reduce communication in distributed and federated learning
Trudy Matematicheskogo Instituta imeni V.A. Steklova, Volume 79 (2024), no. 6, pp. 939-973
Modern trends in machine learning demand ever greater generalization ability from models, which leads to growth in both model size and training sample size. Problems of this scale are already difficult to solve on a single device, which is why distributed and federated learning approaches are becoming more popular every day. Distributed computing involves communication between devices, which raises two key problems: efficiency and privacy. One of the best-known ways to combat communication costs is to exploit the similarity of local data. Both Hessian similarity and homogeneous gradients have been studied in the literature, but separately. In this paper we combine both of these assumptions in the analysis of a new method that incorporates the ideas of data similarity and client sampling. Moreover, to address privacy concerns, we apply the technique of additional noise and analyze its impact on the convergence of the proposed method. The theory is confirmed by training on real datasets.
Bibliography: 45 titles.
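The two similarity notions mentioned in the abstract are usually formalized in the distributed-optimization literature as uniform bounds on the deviation of each local objective from the global average. The display below is this standard formulation; the exact constants and norms used in the paper may differ:

\[
\|\nabla^2 f_m(z) - \nabla^2 f(z)\| \le \delta \quad \text{(Hessian similarity)}, \qquad \|\nabla f_m(z) - \nabla f(z)\| \le \Delta \quad \text{(gradient homogeneity)},
\]

where $f(z) = \frac{1}{M}\sum_{m=1}^{M} f_m(z)$ is the global objective, $f_m$ is the objective on device $m$, and the constants $\delta, \Delta \ge 0$ shrink as the local datasets become more alike.

To illustrate the other two ingredients, client sampling and the additional-noise privacy technique, here is a minimal Python sketch of one distributed training loop. It is not the paper's ASEG algorithm; every name and parameter (the toy least-squares losses, num_sampled, noise_std, the step size) is a hypothetical choice made only for the demonstration.

import numpy as np

# Toy distributed least-squares problem: M clients, each holding local
# data (A_m, b_m) for the loss f_m(x) = ||A_m x - b_m||^2 / (2 n_m).
rng = np.random.default_rng(0)
M, d = 20, 5
A = [rng.standard_normal((10, d)) for _ in range(M)]
b = [rng.standard_normal(10) for _ in range(M)]

def local_grad(m, x):
    """Gradient of the local loss on client m."""
    return A[m].T @ (A[m] @ x - b[m]) / len(b[m])

def communication_round(x, num_sampled=5, noise_std=0.01):
    """One round: only a sampled subset of clients communicates (saving
    bandwidth), and each adds Gaussian noise to its message (masking
    the raw local gradient for privacy)."""
    chosen = rng.choice(M, size=num_sampled, replace=False)
    msgs = [local_grad(m, x) + noise_std * rng.standard_normal(d)
            for m in chosen]
    return np.mean(msgs, axis=0)  # server averages received messages

x = np.zeros(d)
for _ in range(200):
    x -= 0.1 * communication_round(x)

full_grad = np.mean([local_grad(m, x) for m in range(M)], axis=0)
print("final global gradient norm:", np.linalg.norm(full_grad))

Sampling fewer clients per round reduces communication at the cost of a noisier aggregate, while noise_std controls the privacy-utility trade-off; the paper's analysis quantifies how such noise affects the convergence of its method.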
Keywords: ASEG, distributed learning, federated learning, communication costs, Hessian similarity, homogeneous gradients, technique of additional noise
@article{RM_2024_79_6_a1,
     author = {D. A. Bylinkin and K. D. Degtyarev and A. N. Beznosikov},
     title = {Accelerated {Stochastic} {ExtraGradient:} {Mixing} {Hessian} and gradient similarity to reduce communication in distributed and federated learning},
     journal = {Trudy Matematicheskogo Instituta imeni V.A. Steklova},
     pages = {939--973},
     publisher = {mathdoc},
     volume = {79},
     number = {6},
     year = {2024},
     language = {en},
     url = {http://geodesic.mathdoc.fr/item/RM_2024_79_6_a1/}
}
TY  - JOUR
AU  - D. A. Bylinkin
AU  - K. D. Degtyarev
AU  - A. N. Beznosikov
TI  - Accelerated Stochastic ExtraGradient: Mixing Hessian and gradient similarity to reduce communication in distributed and federated learning
JO  - Trudy Matematicheskogo Instituta imeni V.A. Steklova
PY  - 2024
SP  - 939
EP  - 973
VL  - 79
IS  - 6
PB  - mathdoc
UR  - http://geodesic.mathdoc.fr/item/RM_2024_79_6_a1/
LA  - en
ID  - RM_2024_79_6_a1
ER  - 
%0 Journal Article
%A D. A. Bylinkin
%A K. D. Degtyarev
%A A. N. Beznosikov
%T Accelerated Stochastic ExtraGradient: Mixing Hessian and gradient similarity to reduce communication in distributed and federated learning
%J Trudy Matematicheskogo Instituta imeni V.A. Steklova
%D 2024
%P 939-973
%V 79
%N 6
%I mathdoc
%U http://geodesic.mathdoc.fr/item/RM_2024_79_6_a1/
%G en
%F RM_2024_79_6_a1
D. A. Bylinkin; K. D. Degtyarev; A. N. Beznosikov. Accelerated Stochastic ExtraGradient: Mixing Hessian and gradient similarity to reduce communication in distributed and federated learning. Trudy Matematicheskogo Instituta imeni V.A. Steklova, Volume 79 (2024), no. 6, pp. 939-973. http://geodesic.mathdoc.fr/item/RM_2024_79_6_a1/
