
Alexandre Allauzen

Miles Team, LAMSADE, Université Paris Dauphine - PSL, Paris, France; ESPCI PSL, Paris, France

Identifying and typifying demographic unfairness in phoneme-level embeddings of self-supervised speech recognition models

Apr 24, 2026

Where Do Self-Supervised Speech Models Become Unfair?

Apr 20, 2026

Polynomial Mixing for Efficient Self-supervised Speech Encoders

Feb 28, 2026

Forward Only Learning for Orthogonal Neural Networks of any Depth

Dec 19, 2025

On the MIA Vulnerability Gap Between Private GANs and Diffusion Models

Sep 03, 2025

Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics

Jul 03, 2025

Bridging the Theoretical Gap in Randomized Smoothing

Apr 03, 2025

Fast Training of Recurrent Neural Networks with Stationary State Feedbacks

Mar 29, 2025

SCOPE: A Self-supervised Framework for Improving Faithfulness in Conditional Text Generation

Feb 19, 2025

Conditional Distribution Quantization in Machine Learning

Feb 11, 2025