Arjun Nitin Bhagoji

Towards Scalable and Robust Model Versioning
Jan 17, 2024

Characterizing the Optimal 0-1 Loss for Multi-class Classification with a Test-time Attacker
Feb 21, 2023

Augmenting Rule-based DNS Censorship Detection at Scale with Machine Learning
Feb 03, 2023

Natural Backdoor Datasets
Jun 21, 2022

Understanding Robust Learning through the Lens of Representation Similarities
Jun 20, 2022

Can Backdoor Attacks Survive Time-Varying Models?
Jun 08, 2022

Traceback of Data Poisoning Attacks in Neural Networks
Oct 13, 2021

Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries
Apr 16, 2021

A Critical Evaluation of Open-World Machine Learning
Jul 08, 2020

PatchGuard: Provable Defense against Adversarial Patches Using Masks on Small Receptive Fields
Jun 08, 2020