Jiachun Li

RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models

Jun 16, 2024

Towards Faithful Chain-of-Thought: Large Language Models are Bridging Reasoners

May 29, 2024

On the Optimal Regret of Locally Private Linear Contextual Bandit

Apr 15, 2024

Focus on Your Question! Interpreting and Mitigating Toxic CoT Problems in Commonsense Reasoning

Feb 28, 2024

Privacy Preserving Adaptive Experiment Design

Feb 05, 2024

Fight Fire with Fire: Combating Adversarial Patch Attacks using Pattern-randomized Defensive Patches

Nov 10, 2023

We Can Always Catch You: Detecting Adversarial Patched Objects WITH or WITHOUT Signature

Jun 10, 2021