
Yuxin Wen

GenQA: Generating Millions of Instructions from a Handful of Prompts

Jun 14, 2024

Be like a Goldfish, Don't Memorize! Mitigating Memorization in Generative LLMs

Jun 14, 2024

Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization

Apr 02, 2024

Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models

Apr 01, 2024

Coercing LLMs to do and reveal anything

Feb 21, 2024

Benchmarking the Robustness of Image Watermarks

Jan 22, 2024

NEFTune: Noisy Embeddings Improve Instruction Finetuning

Oct 10, 2023

Baseline Defenses for Adversarial Attacks Against Aligned Language Models

Sep 04, 2023

On the Reliability of Watermarks for Large Language Models

Jun 30, 2023

Bring Your Own Data! Self-Supervised Evaluation for Large Language Models

Jun 29, 2023