
Kevin Scaman

DI-ENS

Unbiased Approximate Vector-Jacobian Products for Efficient Backpropagation

Feb 16, 2026

Variance-Reduced $(\varepsilon,\delta)$-Unlearning using Forget Set Gradients

Feb 16, 2026

Adaptive collaboration for online personalized distributed learning with heterogeneous clients

Jul 09, 2025

When to Forget? Complexity Trade-offs in Machine Unlearning

Feb 24, 2025

Random Sparse Lifts: Construction, Analysis and Convergence of finite sparse networks

Jan 10, 2025

In-depth Analysis of Low-rank Matrix Factorisation in a Federated Setting

Sep 13, 2024

Generalization Error of First-Order Methods for Statistical Learning with Generic Oracles

Jul 11, 2023

Convergence beyond the over-parameterized regime using Rayleigh quotients

Jan 19, 2023

Tight High Probability Bounds for Linear Stochastic Approximation with Fixed Stepsize

Jun 02, 2021

Lipschitz Normalization for Self-Attention Layers with Application to Graph Neural Networks

Mar 08, 2021