Wangyue Li

Imposter.AI: Adversarial Attacks with Hidden Intentions towards Aligned Large Language Models

Jul 22, 2024

Can multiple-choice questions really be useful in detecting the abilities of LLMs?

Mar 28, 2024

CARE-MI: Chinese Benchmark for Misinformation Evaluation in Maternity and Infant Care

Jul 04, 2023