Sravanti Addepalli

ProFeAT: Projected Feature Adversarial Training for Self-Supervised Learning of Robust Representations

Jun 09, 2024

Distilling from Vision-Language Models for Improved OOD Generalization in Vision Tasks

Oct 12, 2023

Boosting Adversarial Robustness using Feature Level Stochastic Smoothing

Jun 10, 2023

Certified Adversarial Robustness Within Multiple Perturbation Bounds

Apr 20, 2023

DART: Diversify-Aggregate-Repeat Training Improves Generalization of Neural Networks

Feb 28, 2023

Efficient and Effective Augmentation Strategy for Adversarial Training

Oct 27, 2022

Towards Efficient and Effective Self-Supervised Learning of Visual Representations

Oct 18, 2022

Scaling Adversarial Training to Large Perturbation Bounds

Oct 18, 2022

Learning an Invertible Output Mapping Can Mitigate Simplicity Bias in Neural Networks

Oct 04, 2022

DAFT: Distilling Adversarially Fine-tuned Models for Better OOD Generalization

Aug 19, 2022