
Muhao Chen

University of California, Davis

Contrastive Instruction Tuning

Feb 17, 2024

Privacy-Preserving Language Model Inference with Instance Obfuscation

Feb 13, 2024

Instructional Fingerprinting of Large Language Models

Jan 21, 2024

DeepEdit: Knowledge Editing as Decoding with Constraints

Jan 19, 2024

Rethinking Tabular Data Understanding with Large Language Models

Dec 27, 2023

Deceiving Semantic Shortcuts on Reasoning Chains: How Far Can Models Go without Hallucination?

Nov 16, 2023

Cognitive Overload: Jailbreaking Large Language Models with Overloaded Logical Thinking

Nov 16, 2023

Test-time Backdoor Mitigation for Black-Box Large Language Models with Defensive Demonstrations

Nov 16, 2023

On the Exploitability of Reinforcement Learning with Human Feedback for Large Language Models

Nov 16, 2023

How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities

Nov 15, 2023