Florian Tramer

Quantifying Memorization Across Neural Language Models
Feb 24, 2022

Membership Inference Attacks From First Principles
Dec 07, 2021

Extracting Training Data from Large Language Models
Dec 14, 2020

An Attack on InstaHide: Is Private Learning Possible with Instance Encoding?
Nov 10, 2020

Label-Only Membership Inference Attacks
Jul 28, 2020

On Adaptive Attacks to Adversarial Example Defenses
Feb 19, 2020

Physical Adversarial Examples for Object Detectors
Oct 05, 2018

Note on Attacking Object Detectors with Adversarial Stickers
Jul 23, 2018

Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware
Jun 08, 2018