Arjun Arunasalam

Rethinking How to Evaluate Language Model Jailbreak

Apr 12, 2024
Hongyu Cai, Arjun Arunasalam, Leo Y. Lin, Antonio Bianchi, Z. Berkay Celik

Can Large Language Models Provide Security & Privacy Advice? Measuring the Ability of LLMs to Refute Misconceptions

Oct 03, 2023
Yufan Chen, Arjun Arunasalam, Z. Berkay Celik
