Zhaohan Xi

SafeGPT: Preventing Data Leakage and Unethical Outputs in Enterprise LLM Use

Jan 10, 2026

Semantic NLP Pipelines for Interoperable Patient Digital Twins from Unstructured EHRs

Jan 09, 2026

Smart Privacy Policy Assistant: An LLM-Powered System for Transparent and Actionable Privacy Notices

Jan 09, 2026

POLAR: Automating Cyber Threat Prioritization through LLM-Powered Assessment

Oct 02, 2025

Empowering Clinical Trial Design through AI: A Randomized Evaluation of PowerGPT

Sep 15, 2025

On the Eligibility of LLMs for Counterfactual Reasoning: A Decompositional Study

May 17, 2025

Buckle Up: Robustifying LLMs at Every Customization Stage via Data Curation

Oct 03, 2024

Zodiac: A Cardiologist-Level LLM Framework for Multi-Agent Diagnostics

Oct 02, 2024

PromptFix: Few-shot Backdoor Removal via Adversarial Prompt Tuning

Jun 06, 2024

Robustifying Safety-Aligned Large Language Models through Clean Data Curation

May 31, 2024