
Samuel Horváth

Byzantine-Robust Optimization under $(L_0, L_1)$-Smoothness

Mar 12, 2026

Learning in the Null Space: Small Singular Values for Continual Learning

Feb 25, 2026

Beyond SGD, Without SVD: Proximal Subspace Iteration LoRA with Diagonal Fractional K-FAC

Feb 18, 2026

FlexRank: Nested Low-Rank Knowledge Decomposition for Adaptive Model Deployment

Feb 02, 2026

Who to Trust? Aggregating Client Knowledge in Logit-Based Federated Learning

Sep 18, 2025

Simple Stepsize for Quasi-Newton Methods with Global Convergence Guarantees

Aug 27, 2025

DES-LOC: Desynced Low Communication Adaptive Optimizers for Training Foundation Models

May 28, 2025

Convergence of Clipped-SGD for Convex $(L_0,L_1)$-Smooth Optimization with Heavy-Tailed Noise

May 27, 2025

Fishing For Cheap And Efficient Pruners At Initialization

Feb 17, 2025

Revisiting LocalSGD and SCAFFOLD: Improved Rates and Missing Analysis

Jan 08, 2025