Jinan Xu

KDFlow: A User-Friendly and Efficient Knowledge Distillation Framework for Large Language Models (Mar 02, 2026)

Imagination Helps Visual Reasoning, But Not Yet in Latent Space (Feb 26, 2026)

Language-Coupled Reinforcement Learning for Multilingual Retrieval-Augmented Generation (Jan 21, 2026)

When Helpers Become Hazards: A Benchmark for Analyzing Multimodal LLM-Powered Safety in Daily Life (Jan 07, 2026)

Think Natively: Unlocking Multilingual Reasoning with Consistency-Enhanced Reinforcement Learning (Oct 08, 2025)

Boosting Data Utilization for Multilingual Dense Retrieval (Sep 11, 2025)

CM-Align: Consistency-based Multilingual Alignment for Large Language Models (Sep 10, 2025)

Less, but Better: Efficient Multilingual Expansion for LLMs via Layer-wise Mixture-of-Experts (May 28, 2025)

Multilingual Collaborative Defense for Large Language Models (May 17, 2025)

Think in Safety: Unveiling and Mitigating Safety Alignment Collapse in Multimodal Large Reasoning Models (May 10, 2025)