
Kang Zhu

PopAlign: Diversifying Contrasting Patterns for a More Comprehensive Alignment

Oct 17, 2024

LIME-M: Less Is More for Evaluation of MLLMs

Sep 10, 2024

MMRA: A Benchmark for Evaluating Multi-Granularity and Multi-Image Relational Association Capabilities in Large Visual Language Models

Aug 06, 2024

MMRA: A Benchmark for Multi-granularity Multi-image Relational Association

Jul 24, 2024

MDPE: A Multimodal Deception Dataset with Personality and Emotional Characteristics

Jul 17, 2024

MetaDesigner: Advancing Artistic Typography through AI-Driven, User-Centric, and Multilingual WordArt Synthesis

Jun 28, 2024

PIN: A Knowledge-Intensive Dataset for Paired and Interleaved Multimodal Documents

Jun 20, 2024

SciMMIR: Benchmarking Scientific Multi-modal Information Retrieval

Jan 24, 2024

CMMMU: A Chinese Massive Multi-discipline Multimodal Understanding Benchmark

Jan 22, 2024

Multi-perspective Information Fusion Res2Net with RandomSpecmix for Fake Speech Detection

Jun 27, 2023