
Zekai Ye

CAST: Mitigating Object Hallucination in Large Vision-Language Models via Caption-Guided Visual Attention Steering

May 06, 2026

Not All Tokens See Equally: Perception-Grounded Policy Optimization for Large Vision-Language Models

Apr 02, 2026

Causal Tracing of Object Representations in Large Vision Language Models: Mechanistic Interpretability and Hallucination Mitigation

Nov 19, 2025