Sravanti Addepalli

Distilling from Vision-Language Models for Improved OOD Generalization in Vision Tasks
Oct 12, 2023
Sravanti Addepalli, Ashish Ramayee Asokan, Lakshay Sharma, R. Venkatesh Babu

Boosting Adversarial Robustness using Feature Level Stochastic Smoothing
Jun 10, 2023
Sravanti Addepalli, Samyak Jain, Gaurang Sriramanan, R. Venkatesh Babu

Certified Adversarial Robustness Within Multiple Perturbation Bounds
Apr 20, 2023
Soumalya Nandi, Sravanti Addepalli, Harsh Rangwani, R. Venkatesh Babu

DART: Diversify-Aggregate-Repeat Training Improves Generalization of Neural Networks
Feb 28, 2023
Samyak Jain, Sravanti Addepalli, Pawan Sahu, Priyam Dey, R. Venkatesh Babu

Efficient and Effective Augmentation Strategy for Adversarial Training
Oct 27, 2022
Sravanti Addepalli, Samyak Jain, R. Venkatesh Babu

Towards Efficient and Effective Self-Supervised Learning of Visual Representations
Oct 18, 2022
Sravanti Addepalli, Kaushal Bhogale, Priyam Dey, R. Venkatesh Babu

Scaling Adversarial Training to Large Perturbation Bounds
Oct 18, 2022
Sravanti Addepalli, Samyak Jain, Gaurang Sriramanan, R. Venkatesh Babu

Learning an Invertible Output Mapping Can Mitigate Simplicity Bias in Neural Networks
Oct 04, 2022
Sravanti Addepalli, Anshul Nasery, R. Venkatesh Babu, Praneeth Netrapalli, Prateek Jain

DAFT: Distilling Adversarially Fine-tuned Models for Better OOD Generalization
Aug 19, 2022
Anshul Nasery, Sravanti Addepalli, Praneeth Netrapalli, Prateek Jain

Towards Data-Free Model Stealing in a Hard Label Setting
Apr 23, 2022
Sunandini Sanyal, Sravanti Addepalli, R. Venkatesh Babu