Furong Huang

SAFLEX: Self-Adaptive Augmentation via Feature Label Extrapolation
Oct 03, 2024

Auction-Based Regulation for Artificial Intelligence
Oct 02, 2024

Easy2Hard-Bench: Standardized Difficulty Labels for Profiling LLM Performance and Generalization
Sep 27, 2024

Automatic Pseudo-Harmful Prompt Generation for Evaluating False Refusals in Large Language Models
Sep 01, 2024

Can Watermarking Large Language Models Prevent Copyrighted Text Generation and Hide Training Data?
Jul 24, 2024

Make-An-Agent: A Generalizable Policy Network Generator with Behavior-Prompted Diffusion
Jul 15, 2024

SAIL: Self-Improving Efficient Online Alignment of Large Language Models
Jun 21, 2024

Sketch-GNN: Scalable Graph Neural Networks with Sublinear Training Complexity
Jun 21, 2024

Multi-Stage Balanced Distillation: Addressing Long-Tail Challenges in Sequence-Level Knowledge Distillation
Jun 19, 2024

Adversarial Attacks on Large Language Models in Medicine
Jun 18, 2024