Alex Bie

Parametric Feature Transfer: One-shot Federated Learning with Foundation Models

Feb 02, 2024
Mahdi Beitollahi, Alex Bie, Sobhan Hemati, Leo Maxime Brunswic, Xu Li, Xi Chen, Guojun Zhang

Normalization Is All You Need: Understanding Layer-Normalized Federated Learning under Extreme Label Shift

Aug 18, 2023
Guojun Zhang, Mahdi Beitollahi, Alex Bie, Xi Chen

Private Distribution Learning with Public Data: The View from Sample Compression

Aug 14, 2023
Shai Ben-David, Alex Bie, Clément L. Canonne, Gautam Kamath, Vikrant Singhal

Private GANs, Revisited

Feb 06, 2023
Alex Bie, Gautam Kamath, Guojun Zhang

Private Estimation with Public Data

Aug 16, 2022
Alex Bie, Gautam Kamath, Vikrant Singhal

Don't Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence

Nov 29, 2021
Tianshi Cao, Alex Bie, Arash Vahdat, Sanja Fidler, Karsten Kreis

Fully Quantizing a Simplified Transformer for End-to-end Speech Recognition

Nov 09, 2019
Alex Bie, Bharat Venkitesh, Joao Monteiro, Md. Akmal Haidar, Mehdi Rezagholizadeh