Guillaume Leclerc

Rethinking Backdoor Attacks

Jul 19, 2023
Alaa Khaddaj, Guillaume Leclerc, Aleksandar Makelov, Kristian Georgiev, Hadi Salman, Andrew Ilyas, Aleksander Madry

FFCV: Accelerating Training by Removing Data Bottlenecks

Jun 21, 2023
Guillaume Leclerc, Andrew Ilyas, Logan Engstrom, Sung Min Park, Hadi Salman, Aleksander Madry

TRAK: Attributing Model Behavior at Scale

Apr 03, 2023
Sung Min Park, Kristian Georgiev, Andrew Ilyas, Guillaume Leclerc, Aleksander Madry

Raising the Cost of Malicious AI-Powered Image Editing

Feb 13, 2023
Hadi Salman, Alaa Khaddaj, Guillaume Leclerc, Andrew Ilyas, Aleksander Madry

Adversarially trained neural representations may already be as robust as corresponding biological neural representations

Jun 19, 2022
Chong Guo, Michael J. Lee, Guillaume Leclerc, Joel Dapello, Yug Rao, Aleksander Madry, James J. DiCarlo

Datamodels: Predicting Predictions from Training Data

Feb 01, 2022
Andrew Ilyas, Sung Min Park, Logan Engstrom, Guillaume Leclerc, Aleksander Madry

3DB: A Framework for Debugging Computer Vision Models

Jun 07, 2021
Guillaume Leclerc, Hadi Salman, Andrew Ilyas, Sai Vemprala, Logan Engstrom, Vibhav Vineet, Kai Xiao, Pengchuan Zhang, Shibani Santurkar, Greg Yang, Ashish Kapoor, Aleksander Madry

Revisiting Ensembles in an Adversarial Context: Improving Natural Accuracy

Feb 26, 2020
Aditya Saligrama, Guillaume Leclerc

The Two Regimes of Deep Network Training

Feb 24, 2020
Guillaume Leclerc, Aleksander Madry
