Fengyun Rao

WeThink: Toward General-purpose Vision-Language Reasoning via Reinforcement Learning

Jun 09, 2025

Instruction-augmented Multimodal Alignment for Image-Text and Element Matching

Apr 16, 2025

Instruction-Oriented Preference Alignment for Enhancing Multi-Modal Comprehension Capability of MLLMs

Mar 26, 2025

From Trial to Triumph: Advancing Long Video Understanding via Visual Context Sample Scaling and Self-reward Alignment

Mar 26, 2025

R1-Onevision: Advancing Generalized Multimodal Reasoning through Cross-Modal Formalization

Mar 13, 2025

PerturboLLaVA: Reducing Multimodal Hallucinations with Perturbative Visual Training

Mar 09, 2025

HarmonySet: A Comprehensive Dataset for Understanding Video-Music Semantic Alignment and Temporal Synchronization

Mar 04, 2025

Number it: Temporal Grounding Videos like Flipping Manga

Nov 15, 2024

MMAR: Towards Lossless Multi-Modal Auto-Regressive Probabilistic Modeling

Oct 15, 2024

EE-MLLM: A Data-Efficient and Compute-Efficient Multimodal Large Language Model

Aug 21, 2024