Atul Prakash

Efficient Adversarial Training with Transferable Adversarial Examples

Dec 27, 2019

Can Attention Masks Improve Adversarial Robustness?

Dec 21, 2019

Transferable Adversarial Robustness using Adversarially Trained Autoencoders

Sep 12, 2019

Robust Classification using Robust Feature Augmentation

May 31, 2019

Analyzing the Interpretability Robustness of Self-Explaining Models

May 27, 2019

Designing Adversarially Resilient Classifiers using Resilient Feature Engineering

Dec 17, 2018

Physical Adversarial Examples for Object Detectors

Oct 05, 2018

Note on Attacking Object Detectors with Adversarial Stickers

Jul 23, 2018

Robust Physical-World Attacks on Deep Learning Models

Apr 10, 2018