Kai Chen

Information Density Principle for MLLM Benchmarks

Mar 13, 2025

GenDR: Lightning Generative Detail Restorator

Mar 09, 2025

Mask-DPO: Generalizable Fine-grained Factuality Alignment of LLMs

Mar 04, 2025

Sparse Meets Dense: Unified Generative Recommendations with Cascaded Sparse-Dense Representations

Mar 04, 2025

Interactive Navigation for Legged Manipulators with Learned Arm-Pushing Controller

Mar 03, 2025

CritiQ: Mining Data Quality Criteria from Human Preferences

Feb 26, 2025

OmniAlign-V: Towards Enhanced Alignment of MLLMs with Human Preference

Feb 25, 2025

PPC-GPT: Federated Task-Specific Compression of Large Language Models via Pruning and Chain-of-Thought Distillation

Feb 21, 2025

DH-RAG: A Dynamic Historical Context-Powered Retrieval-Augmented Generation Method for Multi-Turn Dialogue

Feb 19, 2025

Corrupted but Not Broken: Rethinking the Impact of Corrupted Data in Visual Instruction Tuning

Feb 18, 2025