Xiaoye Qu

LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training

Nov 24, 2024

CLIP-MoE: Towards Building Mixture of Experts for CLIP with Diversified Multiplet Upcycling

Sep 28, 2024

SURf: Teaching Large Vision-Language Models to Selectively Utilize Retrieved Information

Sep 21, 2024

Look, Compare, Decide: Alleviating Hallucination in Large Vision-Language Models via Multi-View Multi-Path Reasoning

Aug 30, 2024

ConflictBank: A Benchmark for Evaluating the Influence of Knowledge Conflicts in LLM

Aug 22, 2024

Mitigating Multilingual Hallucination in Large Vision-Language Models

Aug 01, 2024

Alleviating Hallucination in Large Vision-Language Models with Active Retrieval Augmentation

Aug 01, 2024

A Survey of Attacks on Large Vision-Language Models: Resources, Advances, and Future Trends

Jul 10, 2024

LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training

Jun 24, 2024

Timo: Towards Better Temporal Reasoning for Language Models

Jun 20, 2024