
Furong Huang

LIAR: Leveraging Alignment (Best-of-N) to Jailbreak LLMs in Seconds

Dec 06, 2024

Immune: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment

Nov 27, 2024

Ensuring Safety and Trust: Analyzing the Risks of Large Language Models in Medicine

Nov 20, 2024

Benchmarking Vision Language Model Unlearning via Fictitious Facial Identity Dataset

Nov 05, 2024

Statistical Guarantees for Lifelong Reinforcement Learning using PAC-Bayesian Theory

Nov 01, 2024

AdvBDGen: Adversarially Fortified Prompt-Specific Fuzzy Backdoor Generator Against LLM Alignment

Oct 15, 2024

GenARM: Reward Guided Generation with Autoregressive Reward Model for Test-time Alignment

Oct 10, 2024

Towards Self-Improvement of LLMs via MCTS: Leveraging Stepwise Knowledge with Curriculum Preference Learning

Oct 09, 2024

EnsemW2S: Can an Ensemble of LLMs be Leveraged to Obtain a Stronger LLM?

Oct 06, 2024

Boosting Sample Efficiency and Generalization in Multi-agent Reinforcement Learning via Equivariance

Oct 03, 2024