
Atul Prakash

Coverage-centric Coreset Selection for High Pruning Rates

Oct 28, 2022

Constraining the Attack Space of Machine Learning Models with Distribution Clamping Preprocessing

May 18, 2022

Concept-based Explanations for Out-Of-Distribution Detectors

Mar 04, 2022

Towards Adversarially Robust Deepfake Detection: An Ensemble Approach

Feb 11, 2022

Using Anomaly Feature Vectors for Detecting, Classifying and Warning of Outlier Adversarial Examples

Jul 01, 2021

Essential Features: Reducing the Attack Surface of Adversarial Perturbations with Robust Content-Aware Image Preprocessing

Dec 03, 2020

Understanding and Diagnosing Vulnerability under Adversarial Attacks

Jul 17, 2020

Towards Robustness against Unsuspicious Adversarial Examples

May 08, 2020

MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation

May 06, 2020

Query-Efficient Physical Hard-Label Attacks on Deep Learning Visual Classification

Feb 17, 2020