
Zhaohan Xi

POLAR: Automating Cyber Threat Prioritization through LLM-Powered Assessment
Oct 02, 2025

Empowering Clinical Trial Design through AI: A Randomized Evaluation of PowerGPT
Sep 15, 2025

On the Eligibility of LLMs for Counterfactual Reasoning: A Decompositional Study
May 17, 2025

Buckle Up: Robustifying LLMs at Every Customization Stage via Data Curation
Oct 03, 2024

Zodiac: A Cardiologist-Level LLM Framework for Multi-Agent Diagnostics
Oct 02, 2024

PromptFix: Few-shot Backdoor Removal via Adversarial Prompt Tuning
Jun 06, 2024

Robustifying Safety-Aligned Large Language Models through Clean Data Curation
May 31, 2024

On the Difficulty of Defending Contrastive Learning against Backdoor Attacks
Dec 14, 2023

Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks
Sep 23, 2023

On the Security Risks of Knowledge Graph Reasoning
May 03, 2023