Mher Safaryan

AsGrad: A Sharp Unified Analysis of Asynchronous-SGD Algorithms

Oct 31, 2023
Rustem Islamov, Mher Safaryan, Dan Alistarh

Knowledge Distillation Performs Partial Variance Reduction

May 27, 2023
Mher Safaryan, Alexandra Peste, Dan Alistarh

GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity

Oct 28, 2022
Artavazd Maranjyan, Mher Safaryan, Peter Richtárik

Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation

Jun 07, 2022
Rustem Islamov, Xun Qian, Slavomír Hanzely, Mher Safaryan, Peter Richtárik

Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning

Nov 02, 2021
Xun Qian, Rustem Islamov, Mher Safaryan, Peter Richtárik

Smoothness-Aware Quantization Techniques

Jun 07, 2021
Bokun Wang, Mher Safaryan, Peter Richtárik

FedNL: Making Newton-Type Methods Applicable to Federated Learning

Jun 05, 2021
Mher Safaryan, Rustem Islamov, Xun Qian, Peter Richtárik

Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization

Feb 14, 2021
Mher Safaryan, Filip Hanzely, Peter Richtárik

Optimal Gradient Compression for Distributed and Federated Learning

Oct 07, 2020
Alyazeed Albasyoni, Mher Safaryan, Laurent Condat, Peter Richtárik

On Biased Compression for Distributed Learning

Feb 27, 2020
Aleksandr Beznosikov, Samuel Horváth, Peter Richtárik, Mher Safaryan
