
Kehai Chen

Character-R1: Enhancing Role-Aware Reasoning in Role-Playing Agents via RLVR

Jan 08, 2026

From Perception to Reasoning: Deep Thinking Empowers Multimodal Large Language Models

Nov 18, 2025

LoCoT2V-Bench: A Benchmark for Long-Form and Complex Text-to-Video Generation

Oct 30, 2025

From Bias to Balance: Exploring and Mitigating Spatial Bias in LVLMs

Sep 26, 2025

XBOUND: Exploring the Capability Boundaries of Device-Control Agents through Trajectory Tree Exploration

May 27, 2025

Evaluating and Steering Modality Preferences in Multimodal Large Language Model

May 27, 2025

MDIT-Bench: Evaluating the Dual-Implicit Toxicity in Large Multimodal Models

May 22, 2025

Lost in Benchmarks? Rethinking Large Language Model Benchmarking with Item Response Theory

May 21, 2025

MoK-RAG: Mixture of Knowledge Paths Enhanced Retrieval-Augmented Generation for Embodied AI Environments

Mar 18, 2025

Representation-based Reward Modeling for Efficient Safety Alignment of Large Language Model

Mar 13, 2025