Xinlei He

Backdoor Attack on Vision Language Models with Stealthy Semantic Manipulation

Jun 08, 2025

Evaluation Hallucination in Multi-Round Incomplete Information Lateral-Driven Reasoning Tasks

May 28, 2025

JALMBench: Benchmarking Jailbreak Vulnerabilities in Audio Language Models

May 23, 2025

FragFake: A Dataset for Fine-Grained Detection of Edited Images with Vision Language Models

May 21, 2025

RePPL: Recalibrating Perplexity by Uncertainty in Semantic Propagation and Language Generation for Explainable QA Hallucination Detection

May 21, 2025

An Empirical Study of the Anchoring Effect in LLMs: Existence, Mechanism, and Potential Mitigations

May 21, 2025

GUARD: Generation-time LLM Unlearning via Adaptive Restriction and Detection

May 19, 2025

"I Can See Forever!": Evaluating Real-time VideoLLMs for Assisting Individuals with Visual Impairments

May 07, 2025

Holmes: Automated Fact Check with Large Language Models

May 06, 2025

Humanizing LLMs: A Survey of Psychological Measurements with Tools, Datasets, and Human-Agent Applications

Apr 30, 2025