Wenpeng Lu

Dynamic Visual-semantic Alignment for Zero-shot Learning with Ambiguous Labels

Apr 20, 2026

How Do LLMs and VLMs Understand Viewpoint Rotation Without Vision? An Interpretability Study

Apr 16, 2026

ContiGuard: A Framework for Continual Toxicity Detection Against Evolving Evasive Perturbations

Mar 16, 2026

CLIP-driven Zero-shot Learning with Ambiguous Labels

Mar 05, 2026

BIOME-Bench: A Benchmark for Biomolecular Interaction Inference and Multi-Omics Pathway Mechanism Elucidation from Scientific Literature

Dec 31, 2025

A Survey on Training-free Alignment of Large Language Models

Aug 12, 2025

CCHall: A Novel Benchmark for Joint Cross-Lingual and Cross-Modal Hallucinations Detection in Large Language Models

May 25, 2025

SRLCG: Self-Rectified Large-Scale Code Generation with Multidimensional Chain-of-Thought and Dynamic Backtracking

Apr 01, 2025

BianCang: A Traditional Chinese Medicine Large Language Model

Nov 17, 2024

PMoL: Parameter Efficient MoE for Preference Mixing of LLM Alignment

Nov 02, 2024