Qian Lou

Factuality Beyond Coherence: Evaluating LLM Watermarking Methods for Medical Texts

Sep 09, 2025

TFHE-Coder: Evaluating LLM-agentic Fully Homomorphic Encryption Code Generation

Mar 15, 2025

CipherPrune: Efficient and Scalable Private Transformer Inference

Feb 24, 2025

Uncovering the Hidden Threat of Text Watermarking from Users with Cross-Lingual Knowledge

Feb 23, 2025

Towards Safe AI Clinicians: A Comprehensive Study on Large Language Model Jailbreaking in Healthcare

Jan 27, 2025

freePruner: A Training-free Approach for Large Multimodal Model Acceleration

Nov 23, 2024

BadFair: Backdoored Fairness Attacks with Group-conditioned Triggers

Oct 23, 2024

CryptoTrain: Fast Secure Training on Encrypted Dataset

Sep 25, 2024

Jailbreaking LLMs with Arabic Transliteration and Arabizi

Jun 26, 2024

CR-UTP: Certified Robustness against Universal Text Perturbations

Jun 04, 2024