Nicolas Papernot
Chasing Your Long Tails: Differentially Private Prediction in Health Care Settings

Oct 13, 2020
Vinith M. Suriyakumar, Nicolas Papernot, Anna Goldenberg, Marzyeh Ghassemi


Not My Deepfake: Towards Plausible Deniability for Machine-Generated Media

Aug 20, 2020
Baiwu Zhang, Jin Peng Zhou, Ilia Shumailov, Nicolas Papernot


Label-Only Membership Inference Attacks

Jul 28, 2020
Christopher A. Choquette Choo, Florian Tramer, Nicholas Carlini, Nicolas Papernot


Tempered Sigmoid Activations for Deep Learning with Differential Privacy

Jul 28, 2020
Nicolas Papernot, Abhradeep Thakurta, Shuang Song, Steve Chien, Úlfar Erlingsson


SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems

Jul 21, 2020
Hadi Abdullah, Kevin Warren, Vincent Bindschaedler, Nicolas Papernot, Patrick Traynor


The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems

Jul 13, 2020
Hadi Abdullah, Kevin Warren, Vincent Bindschaedler, Nicolas Papernot, Patrick Traynor


Sponge Examples: Energy-Latency Attacks on Neural Networks

Jun 05, 2020
Ilia Shumailov, Yiren Zhao, Daniel Bates, Nicolas Papernot, Robert Mullins, Ross Anderson


On the Robustness of Cooperative Multi-Agent Reinforcement Learning

Mar 08, 2020
Jieyu Lin, Kristina Dzeparoska, Sai Qian Zhang, Alberto Leon-Garcia, Nicolas Papernot


On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping

Feb 27, 2020
Sanghyun Hong, Varun Chandrasekaran, Yiğitcan Kaya, Tudor Dumitraş, Nicolas Papernot
