
Qixin Sun

LLaVA-CMoE: Towards Continual Mixture of Experts for Large Vision-Language Models

Mar 27, 2025

MIH-TCCT: Mitigating Inconsistent Hallucinations in LLMs via Event-Driven Text-Code Cyclic Training

Feb 13, 2025