Jingxuan Wei

ChartMind: A Comprehensive Benchmark for Complex Real-world Multimodal Chart Question Answering
May 29, 2025

MM-Verify: Enhancing Multimodal Reasoning with Chain-of-Thought Verification
Feb 19, 2025

Synth-Empathy: Towards High-Quality Synthetic Empathy Data
Jul 31, 2024

Efficient-Empathy: Towards Efficient and Effective Selection of Empathy Data
Jul 02, 2024

Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions
Jun 09, 2024

Retrieval Meets Reasoning: Even High-school Textbook Knowledge Benefits Multimodal Reasoning
May 31, 2024

Sentence-Level or Token-Level? A Comprehensive Study on Knowledge Distillation
Apr 23, 2024

mChartQA: A universal benchmark for multimodal Chart Question Answer based on Vision-Language Alignment and Reasoning
Apr 02, 2024

Rational Sensibility: LLM Enhanced Empathetic Response Generation Guided by Self-presentation Theory
Jan 02, 2024

Unraveling Key Factors of Knowledge Distillation
Dec 24, 2023