Marco Canini

KAUST

Towards a Flexible and High-Fidelity Approach to Distributed DNN Training Emulation

May 05, 2024

Practical Insights into Knowledge Distillation for Pre-Trained Models

Feb 22, 2024

Flashback: Understanding and Mitigating Forgetting in Federated Learning

Feb 08, 2024

Kimad: Adaptive Gradient Compression with Bandwidth Awareness

Dec 13, 2023

Global-QSGD: Practical Floatless Quantization for Distributed Learning with Theoretical Guarantees

May 29, 2023

FilFL: Accelerating Federated Learning via Client Filtering

Feb 13, 2023

Resource-Efficient Federated Learning

Nov 01, 2021

Rethinking gradient sparsification as total error minimization

Aug 02, 2021

AutoLRS: Automatic Learning-Rate Schedule by Bayesian Optimization on the Fly

May 22, 2021

On the Impact of Device and Behavioral Heterogeneity in Federated Learning

Feb 15, 2021