Rafael Pinot

Fine-Tuning Personalization in Federated Learning to Mitigate Adversarial Clients
Sep 30, 2024

Overcoming the Challenges of Batch Normalization in Federated Learning
May 23, 2024

On the Relevance of Byzantine Robust Optimization Against Data Poisoning
May 01, 2024

Tackling Byzantine Clients in Federated Learning
Feb 20, 2024

Practical Homomorphic Aggregation for Byzantine ML
Sep 15, 2023

Distributed Learning with Curious and Adversarial Machines
Feb 09, 2023

Fixing by Mixing: A Recipe for Optimal Byzantine ML under Heterogeneity
Feb 03, 2023

SoK: On the Impossible Security of Very Large Foundation Models
Sep 30, 2022

Making Byzantine Decentralized Learning Efficient
Sep 22, 2022

Towards Evading the Limits of Randomized Smoothing: A Theoretical Analysis
Jun 03, 2022