Florian Tramer

Extracting Training Data from Large Language Models

Dec 14, 2020
Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel


An Attack on InstaHide: Is Private Learning Possible with Instance Encoding?

Nov 10, 2020
Nicholas Carlini, Samuel Deng, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Shuang Song, Abhradeep Thakurta, Florian Tramer


Label-Only Membership Inference Attacks

Jul 28, 2020
Christopher A. Choquette-Choo, Florian Tramer, Nicholas Carlini, Nicolas Papernot


On Adaptive Attacks to Adversarial Example Defenses

Feb 19, 2020
Florian Tramer, Nicholas Carlini, Wieland Brendel, Aleksander Madry


Physical Adversarial Examples for Object Detectors

Oct 05, 2018
Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian Tramer, Atul Prakash, Tadayoshi Kohno, Dawn Song


Note on Attacking Object Detectors with Adversarial Stickers

Jul 23, 2018
Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Dawn Song, Tadayoshi Kohno, Amir Rahmati, Atul Prakash, Florian Tramer


Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware

Jun 08, 2018
Florian Tramer, Dan Boneh
