
Prateek Mittal

RobustBench: a standardized adversarial robustness benchmark

Oct 19, 2020
Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Nicolas Flammarion, Mung Chiang, Prateek Mittal, Matthias Hein

A Critical Evaluation of Open-World Machine Learning

Jul 08, 2020
Liwei Song, Vikash Sehwag, Arjun Nitin Bhagoji, Prateek Mittal

Time for a Background Check! Uncovering the impact of Background Features on Deep Neural Networks

Jun 24, 2020
Vikash Sehwag, Rajvardhan Oak, Mung Chiang, Prateek Mittal

PatchGuard: Provable Defense against Adversarial Patches Using Masks on Small Receptive Fields

Jun 08, 2020
Chong Xiang, Arjun Nitin Bhagoji, Vikash Sehwag, Prateek Mittal

FALCON: Honest-Majority Maliciously Secure Framework for Private Deep Learning

Apr 05, 2020
Sameer Wagh, Shruti Tople, Fabrice Benhamouda, Eyal Kushilevitz, Prateek Mittal, Tal Rabin

Systematic Evaluation of Privacy Risks of Machine Learning Models

Mar 24, 2020
Liwei Song, Prateek Mittal

Towards Probabilistic Verification of Machine Unlearning

Mar 09, 2020
David Marco Sommer, Liwei Song, Sameer Wagh, Prateek Mittal

On Pruning Adversarially Robust Neural Networks

Feb 24, 2020
Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana

Advances and Open Problems in Federated Learning

Dec 10, 2019
Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Keith Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao
