Jingxuan Wei

Efficient-Empathy: Towards Efficient and Effective Selection of Empathy Data

Jul 02, 2024

Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions

Jun 09, 2024

Retrieval Meets Reasoning: Even High-school Textbook Knowledge Benefits Multimodal Reasoning

May 31, 2024

Sentence-Level or Token-Level? A Comprehensive Study on Knowledge Distillation

Apr 23, 2024

mChartQA: A universal benchmark for multimodal Chart Question Answer based on Vision-Language Alignment and Reasoning

Apr 02, 2024

Rational Sensibility: LLM Enhanced Empathetic Response Generation Guided by Self-presentation Theory

Jan 02, 2024

Unraveling Key Factors of Knowledge Distillation

Dec 24, 2023

Boosting the Power of Small Multimodal Reasoning Models to Match Larger Models with Self-Consistency Training

Nov 23, 2023

A Survey on Image-text Multimodal Models

Sep 23, 2023

Enhancing Human-like Multi-Modal Reasoning: A New Challenging Dataset and Comprehensive Framework

Jul 24, 2023