Nan Xu

General algorithm of assigning raster features to vector maps at any resolution or scale

Jul 15, 2024

From Introspection to Best Practices: Principled Analysis of Demonstrations in Multimodal In-Context Learning

Jul 01, 2024

Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA

Jun 25, 2024

mDPO: Conditional Preference Optimization for Multimodal Large Language Models

Jun 17, 2024

3D-RPE: Enhancing Long-Context Modeling Through 3D Rotary Position Encoding

Jun 14, 2024

MuirBench: A Comprehensive Benchmark for Robust Multi-image Understanding

Jun 13, 2024

mChartQA: A universal benchmark for multimodal Chart Question Answer based on Vision-Language Alignment and Reasoning

Apr 02, 2024

Monotonic Paraphrasing Improves Generalization of Language Model Prompting

Mar 24, 2024

YAYI-UIE: A Chat-Enhanced Instruction Tuning Framework for Universal Information Extraction

Jan 08, 2024

Rational Sensibility: LLM Enhanced Empathetic Response Generation Guided by Self-presentation Theory

Jan 02, 2024