Arjun Nitin Bhagoji

Towards Scalable and Robust Model Versioning

Jan 17, 2024
Wenxin Ding, Arjun Nitin Bhagoji, Ben Y. Zhao, Haitao Zheng

Characterizing the Optimal 0-1 Loss for Multi-class Classification with a Test-time Attacker

Feb 21, 2023
Sihui Dai, Wenxin Ding, Arjun Nitin Bhagoji, Daniel Cullina, Ben Y. Zhao, Haitao Zheng, Prateek Mittal

Augmenting Rule-based DNS Censorship Detection at Scale with Machine Learning

Feb 03, 2023
Jacob Alexander Markson Brown, Xi Jiang, Van Tran, Arjun Nitin Bhagoji, Nguyen Phong Hoang, Nick Feamster, Prateek Mittal, Vinod Yegneswaran

Natural Backdoor Datasets

Jun 21, 2022
Emily Wenger, Roma Bhattacharjee, Arjun Nitin Bhagoji, Josephine Passananti, Emilio Andere, Haitao Zheng, Ben Y. Zhao

Understanding Robust Learning through the Lens of Representation Similarities

Jun 20, 2022
Christian Cianfarani, Arjun Nitin Bhagoji, Vikash Sehwag, Ben Zhao, Prateek Mittal

Can Backdoor Attacks Survive Time-Varying Models?

Jun 08, 2022
Huiying Li, Arjun Nitin Bhagoji, Ben Y. Zhao, Haitao Zheng

Traceback of Data Poisoning Attacks in Neural Networks

Oct 13, 2021
Shawn Shan, Arjun Nitin Bhagoji, Haitao Zheng, Ben Y. Zhao

Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries

Apr 16, 2021
Arjun Nitin Bhagoji, Daniel Cullina, Vikash Sehwag, Prateek Mittal

A Critical Evaluation of Open-World Machine Learning

Jul 08, 2020
Liwei Song, Vikash Sehwag, Arjun Nitin Bhagoji, Prateek Mittal

PatchGuard: Provable Defense against Adversarial Patches Using Masks on Small Receptive Fields

Jun 08, 2020
Chong Xiang, Arjun Nitin Bhagoji, Vikash Sehwag, Prateek Mittal
