
Soheil Feizi

Provable Adversarial Robustness for Fractional Lp Threat Models

Mar 16, 2022

Understanding Failure Modes of Self-Supervised Learning

Mar 03, 2022

Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation

Feb 05, 2022

Certifying Model Accuracy under Distribution Shifts

Jan 28, 2022

A Comprehensive Study of Image Classification Model Sensitivity to Foregrounds, Backgrounds, and Visual Attributes

Jan 26, 2022

Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses

Dec 12, 2021

Mutual Adversarial Training: Learning together is better than going alone

Dec 09, 2021

Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection

Dec 08, 2021

Improving Deep Learning Interpretability by Saliency Guided Training

Nov 29, 2021

On Hard Episodes in Meta-Learning

Oct 21, 2021