Eric Wallace

Deduplicating Training Data Mitigates Privacy Risks in Language Models
Feb 16, 2022

Analyzing Dynamic Adversarial Training Data in the Limit
Oct 16, 2021

Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models
Jul 01, 2021

Detoxifying Language Models Risks Marginalizing Minority Voices
Apr 13, 2021

Calibrate Before Use: Improving Few-Shot Performance of Language Models
Feb 19, 2021

Extracting Training Data from Large Language Models
Dec 14, 2020

AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts
Nov 07, 2020

Customizing Triggers with Concealed Data Poisoning
Oct 23, 2020

Gradient-based Analysis of NLP Models is Manipulable
Oct 12, 2020

Trustworthy AI Inference Systems: An Industry Research View
Aug 10, 2020